00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2462 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3723 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.088 The recommended git tool is: git 00:00:00.089 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.118 Fetching changes from the remote Git repository 00:00:00.119 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.160 Using shallow fetch with depth 1 00:00:00.160 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.160 > git --version # timeout=10 00:00:00.196 > git --version # 'git version 2.39.2' 00:00:00.196 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.228 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.228 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.370 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.385 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.398 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.399 > git config core.sparsecheckout # timeout=10 00:00:06.410 > git read-tree -mu HEAD # timeout=10 00:00:06.428 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.449 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.449 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.546 [Pipeline] Start of Pipeline 00:00:06.557 [Pipeline] library 00:00:06.558 Loading library shm_lib@master 00:00:06.558 Library shm_lib@master is cached. Copying from home. 00:00:06.570 [Pipeline] node 00:00:06.583 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.584 [Pipeline] { 00:00:06.591 [Pipeline] catchError 00:00:06.592 [Pipeline] { 00:00:06.600 [Pipeline] wrap 00:00:06.608 [Pipeline] { 00:00:06.613 [Pipeline] stage 00:00:06.614 [Pipeline] { (Prologue) 00:00:06.626 [Pipeline] echo 00:00:06.628 Node: VM-host-SM9 00:00:06.633 [Pipeline] cleanWs 00:00:06.643 [WS-CLEANUP] Deleting project workspace... 00:00:06.643 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.649 [WS-CLEANUP] done 00:00:06.857 [Pipeline] setCustomBuildProperty 00:00:06.955 [Pipeline] httpRequest 00:00:07.298 [Pipeline] echo 00:00:07.300 Sorcerer 10.211.164.20 is alive 00:00:07.309 [Pipeline] retry 00:00:07.310 [Pipeline] { 00:00:07.320 [Pipeline] httpRequest 00:00:07.323 HttpMethod: GET 00:00:07.324 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.325 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.337 Response Code: HTTP/1.1 200 OK 00:00:07.338 Success: Status code 200 is in the accepted range: 200,404 00:00:07.338 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.660 [Pipeline] } 00:00:10.679 [Pipeline] // retry 00:00:10.687 [Pipeline] sh 00:00:10.972 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.986 [Pipeline] httpRequest 00:00:11.688 [Pipeline] echo 00:00:11.690 Sorcerer 10.211.164.20 is alive 00:00:11.699 [Pipeline] retry 00:00:11.701 [Pipeline] { 00:00:11.714 [Pipeline] httpRequest 00:00:11.718 HttpMethod: GET 00:00:11.718 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:11.719 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:11.740 Response Code: HTTP/1.1 200 OK 00:00:11.741 Success: Status code 200 is in the accepted range: 200,404 00:00:11.741 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:10.199 [Pipeline] } 00:01:10.215 [Pipeline] // retry 00:01:10.223 [Pipeline] sh 00:01:10.502 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:13.048 [Pipeline] sh 00:01:13.367 + git -C spdk log --oneline -n5 00:01:13.367 c13c99a5e test: Various fixes for Fedora40 00:01:13.367 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:13.367 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:13.367 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:13.367 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:13.422 [Pipeline] writeFile 00:01:13.438 [Pipeline] sh 00:01:13.720 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:13.732 [Pipeline] sh 00:01:14.013 + cat autorun-spdk.conf 00:01:14.013 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.013 SPDK_TEST_NVMF=1 00:01:14.013 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.013 SPDK_TEST_URING=1 00:01:14.013 SPDK_TEST_VFIOUSER=1 00:01:14.013 SPDK_TEST_USDT=1 00:01:14.013 SPDK_RUN_UBSAN=1 00:01:14.013 NET_TYPE=virt 00:01:14.013 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:14.020 RUN_NIGHTLY=1 00:01:14.022 [Pipeline] } 00:01:14.037 [Pipeline] // stage 00:01:14.053 [Pipeline] stage 00:01:14.056 [Pipeline] { (Run VM) 00:01:14.068 [Pipeline] sh 00:01:14.349 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:14.349 + echo 'Start stage prepare_nvme.sh' 00:01:14.349 Start stage prepare_nvme.sh 00:01:14.349 + [[ -n 3 ]] 00:01:14.349 + disk_prefix=ex3 00:01:14.349 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:14.349 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:14.349 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:14.349 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.349 ++ SPDK_TEST_NVMF=1 00:01:14.349 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.349 ++ SPDK_TEST_URING=1 00:01:14.349 ++ SPDK_TEST_VFIOUSER=1 00:01:14.349 ++ SPDK_TEST_USDT=1 00:01:14.349 ++ SPDK_RUN_UBSAN=1 00:01:14.349 ++ NET_TYPE=virt 00:01:14.349 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:14.349 ++ RUN_NIGHTLY=1 00:01:14.349 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:14.350 + nvme_files=() 00:01:14.350 + declare -A nvme_files 00:01:14.350 + backend_dir=/var/lib/libvirt/images/backends 00:01:14.350 + nvme_files['nvme.img']=5G 00:01:14.350 + nvme_files['nvme-cmb.img']=5G 00:01:14.350 + nvme_files['nvme-multi0.img']=4G 00:01:14.350 + nvme_files['nvme-multi1.img']=4G 00:01:14.350 + nvme_files['nvme-multi2.img']=4G 00:01:14.350 + nvme_files['nvme-openstack.img']=8G 00:01:14.350 + nvme_files['nvme-zns.img']=5G 00:01:14.350 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:14.350 + (( SPDK_TEST_FTL == 1 )) 00:01:14.350 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:14.350 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:14.350 + for nvme in "${!nvme_files[@]}" 00:01:14.350 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:14.350 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:14.350 + for nvme in "${!nvme_files[@]}" 00:01:14.350 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:14.350 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.350 + for nvme in "${!nvme_files[@]}" 00:01:14.350 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:14.350 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:14.350 + for nvme in "${!nvme_files[@]}" 00:01:14.350 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:14.350 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.350 + for nvme in "${!nvme_files[@]}" 00:01:14.350 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:14.350 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:14.350 + for nvme in "${!nvme_files[@]}" 00:01:14.350 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:14.350 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:14.350 + for nvme in "${!nvme_files[@]}" 00:01:14.350 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:14.609 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.609 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:14.609 + echo 'End stage prepare_nvme.sh' 00:01:14.609 End stage prepare_nvme.sh 00:01:14.620 [Pipeline] sh 00:01:14.900 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:14.901 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img 
-b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:01:14.901 00:01:14.901 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:14.901 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:14.901 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:14.901 HELP=0 00:01:14.901 DRY_RUN=0 00:01:14.901 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:01:14.901 NVME_DISKS_TYPE=nvme,nvme, 00:01:14.901 NVME_AUTO_CREATE=0 00:01:14.901 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:01:14.901 NVME_CMB=,, 00:01:14.901 NVME_PMR=,, 00:01:14.901 NVME_ZNS=,, 00:01:14.901 NVME_MS=,, 00:01:14.901 NVME_FDP=,, 00:01:14.901 SPDK_VAGRANT_DISTRO=fedora39 00:01:14.901 SPDK_VAGRANT_VMCPU=10 00:01:14.901 SPDK_VAGRANT_VMRAM=12288 00:01:14.901 SPDK_VAGRANT_PROVIDER=libvirt 00:01:14.901 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:14.901 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:14.901 SPDK_OPENSTACK_NETWORK=0 00:01:14.901 VAGRANT_PACKAGE_BOX=0 00:01:14.901 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:14.901 FORCE_DISTRO=true 00:01:14.901 VAGRANT_BOX_VERSION= 00:01:14.901 EXTRA_VAGRANTFILES= 00:01:14.901 NIC_MODEL=e1000 00:01:14.901 00:01:14.901 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:14.901 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:17.433 Bringing machine 'default' up with 'libvirt' provider... 00:01:18.001 ==> default: Creating image (snapshot of base box volume). 00:01:18.001 ==> default: Creating domain with the following settings... 
00:01:18.001 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734157891_d70544eb1cf3d884fd33 00:01:18.001 ==> default: -- Domain type: kvm 00:01:18.001 ==> default: -- Cpus: 10 00:01:18.001 ==> default: -- Feature: acpi 00:01:18.001 ==> default: -- Feature: apic 00:01:18.001 ==> default: -- Feature: pae 00:01:18.001 ==> default: -- Memory: 12288M 00:01:18.001 ==> default: -- Memory Backing: hugepages: 00:01:18.001 ==> default: -- Management MAC: 00:01:18.001 ==> default: -- Loader: 00:01:18.001 ==> default: -- Nvram: 00:01:18.001 ==> default: -- Base box: spdk/fedora39 00:01:18.001 ==> default: -- Storage pool: default 00:01:18.001 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734157891_d70544eb1cf3d884fd33.img (20G) 00:01:18.001 ==> default: -- Volume Cache: default 00:01:18.001 ==> default: -- Kernel: 00:01:18.001 ==> default: -- Initrd: 00:01:18.001 ==> default: -- Graphics Type: vnc 00:01:18.001 ==> default: -- Graphics Port: -1 00:01:18.001 ==> default: -- Graphics IP: 127.0.0.1 00:01:18.001 ==> default: -- Graphics Password: Not defined 00:01:18.001 ==> default: -- Video Type: cirrus 00:01:18.001 ==> default: -- Video VRAM: 9216 00:01:18.001 ==> default: -- Sound Type: 00:01:18.001 ==> default: -- Keymap: en-us 00:01:18.001 ==> default: -- TPM Path: 00:01:18.001 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:18.001 ==> default: -- Command line args: 00:01:18.001 ==> default: -> value=-device, 00:01:18.001 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:18.001 ==> default: -> value=-drive, 00:01:18.001 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:18.001 ==> default: -> value=-device, 00:01:18.001 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.001 ==> default: -> value=-device, 00:01:18.001 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:18.001 ==> default: -> value=-drive, 00:01:18.001 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:18.001 ==> default: -> value=-device, 00:01:18.001 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.001 ==> default: -> value=-drive, 00:01:18.001 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:18.001 ==> default: -> value=-device, 00:01:18.001 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.001 ==> default: -> value=-drive, 00:01:18.001 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:18.001 ==> default: -> value=-device, 00:01:18.001 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.262 ==> default: Creating shared folders metadata... 00:01:18.262 ==> default: Starting domain. 00:01:19.642 ==> default: Waiting for domain to get an IP address... 00:01:34.521 ==> default: Waiting for SSH to become available... 00:01:35.897 ==> default: Configuring and enabling network interfaces... 
00:01:40.087 default: SSH address: 192.168.121.90:22 00:01:40.087 default: SSH username: vagrant 00:01:40.087 default: SSH auth method: private key 00:01:41.991 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:50.106 ==> default: Mounting SSHFS shared folder... 00:01:51.041 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:51.041 ==> default: Checking Mount.. 00:01:51.975 ==> default: Folder Successfully Mounted! 00:01:51.975 ==> default: Running provisioner: file... 00:01:52.910 default: ~/.gitconfig => .gitconfig 00:01:53.171 00:01:53.171 SUCCESS! 00:01:53.171 00:01:53.171 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:53.171 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:53.171 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:53.171 00:01:53.179 [Pipeline] } 00:01:53.194 [Pipeline] // stage 00:01:53.202 [Pipeline] dir 00:01:53.203 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:53.204 [Pipeline] { 00:01:53.217 [Pipeline] catchError 00:01:53.218 [Pipeline] { 00:01:53.231 [Pipeline] sh 00:01:53.508 + vagrant ssh-config --host vagrant 00:01:53.508 + sed -ne /^Host/,$p 00:01:53.508 + tee ssh_conf 00:01:56.796 Host vagrant 00:01:56.796 HostName 192.168.121.90 00:01:56.796 User vagrant 00:01:56.796 Port 22 00:01:56.796 UserKnownHostsFile /dev/null 00:01:56.796 StrictHostKeyChecking no 00:01:56.796 PasswordAuthentication no 00:01:56.796 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:56.796 IdentitiesOnly yes 00:01:56.796 LogLevel FATAL 00:01:56.796 ForwardAgent yes 00:01:56.796 ForwardX11 yes 00:01:56.796 00:01:56.809 [Pipeline] withEnv 00:01:56.811 [Pipeline] { 00:01:56.827 [Pipeline] sh 00:01:57.107 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:57.107 source /etc/os-release 00:01:57.107 [[ -e /image.version ]] && img=$(< /image.version) 00:01:57.107 # Minimal, systemd-like check. 00:01:57.107 if [[ -e /.dockerenv ]]; then 00:01:57.107 # Clear garbage from the node's name: 00:01:57.107 # agt-er_autotest_547-896 -> autotest_547-896 00:01:57.107 # $HOSTNAME is the actual container id 00:01:57.107 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:57.107 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:57.107 # We can assume this is a mount from a host where container is running, 00:01:57.107 # so fetch its hostname to easily identify the target swarm worker. 
00:01:57.107 container="$(< /etc/hostname) ($agent)" 00:01:57.107 else 00:01:57.107 # Fallback 00:01:57.107 container=$agent 00:01:57.107 fi 00:01:57.107 fi 00:01:57.107 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:57.107 00:01:57.378 [Pipeline] } 00:01:57.394 [Pipeline] // withEnv 00:01:57.402 [Pipeline] setCustomBuildProperty 00:01:57.416 [Pipeline] stage 00:01:57.418 [Pipeline] { (Tests) 00:01:57.434 [Pipeline] sh 00:01:57.714 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:57.986 [Pipeline] sh 00:01:58.266 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:58.540 [Pipeline] timeout 00:01:58.540 Timeout set to expire in 1 hr 0 min 00:01:58.542 [Pipeline] { 00:01:58.556 [Pipeline] sh 00:01:58.836 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:59.403 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:59.414 [Pipeline] sh 00:01:59.693 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:59.966 [Pipeline] sh 00:02:00.247 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:00.557 [Pipeline] sh 00:02:00.839 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:01.098 ++ readlink -f spdk_repo 00:02:01.098 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:01.098 + [[ -n /home/vagrant/spdk_repo ]] 00:02:01.098 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:01.098 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:01.098 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:01.098 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:01.098 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:01.098 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:01.098 + cd /home/vagrant/spdk_repo 00:02:01.098 + source /etc/os-release 00:02:01.098 ++ NAME='Fedora Linux' 00:02:01.098 ++ VERSION='39 (Cloud Edition)' 00:02:01.098 ++ ID=fedora 00:02:01.098 ++ VERSION_ID=39 00:02:01.098 ++ VERSION_CODENAME= 00:02:01.098 ++ PLATFORM_ID=platform:f39 00:02:01.098 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:01.098 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:01.098 ++ LOGO=fedora-logo-icon 00:02:01.098 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:01.098 ++ HOME_URL=https://fedoraproject.org/ 00:02:01.098 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:01.098 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:01.098 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:01.098 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:01.098 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:01.098 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:01.098 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:01.098 ++ SUPPORT_END=2024-11-12 00:02:01.098 ++ VARIANT='Cloud Edition' 00:02:01.098 ++ VARIANT_ID=cloud 00:02:01.098 + uname -a 00:02:01.098 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:01.098 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:01.098 Hugepages 00:02:01.098 node hugesize free / total 00:02:01.098 node0 1048576kB 0 / 0 00:02:01.098 node0 2048kB 0 / 0 00:02:01.098 00:02:01.098 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:01.098 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:01.098 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:01.098 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:01.098 + rm -f /tmp/spdk-ld-path 00:02:01.098 + source autorun-spdk.conf 00:02:01.098 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:01.098 ++ SPDK_TEST_NVMF=1 00:02:01.098 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:01.098 ++ SPDK_TEST_URING=1 00:02:01.098 ++ SPDK_TEST_VFIOUSER=1 00:02:01.098 ++ SPDK_TEST_USDT=1 00:02:01.098 ++ SPDK_RUN_UBSAN=1 00:02:01.098 ++ NET_TYPE=virt 00:02:01.098 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:01.098 ++ RUN_NIGHTLY=1 00:02:01.098 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:01.098 + [[ -n '' ]] 00:02:01.098 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:01.356 + for M in /var/spdk/build-*-manifest.txt 00:02:01.356 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:01.356 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:01.356 + for M in /var/spdk/build-*-manifest.txt 00:02:01.356 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:01.356 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:01.356 + for M in /var/spdk/build-*-manifest.txt 00:02:01.356 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:01.356 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:01.356 ++ uname 00:02:01.356 + [[ Linux == \L\i\n\u\x ]] 00:02:01.356 + sudo dmesg -T 00:02:01.356 + sudo dmesg --clear 00:02:01.356 + dmesg_pid=5232 00:02:01.356 + [[ Fedora Linux == FreeBSD ]] 00:02:01.356 + sudo dmesg -Tw 00:02:01.356 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:01.356 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:01.356 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 
00:02:01.356 + [[ -x /usr/src/fio-static/fio ]] 00:02:01.356 + export FIO_BIN=/usr/src/fio-static/fio 00:02:01.356 + FIO_BIN=/usr/src/fio-static/fio 00:02:01.356 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:01.356 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:01.356 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:01.356 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:01.356 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:01.356 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:01.356 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:01.356 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:01.356 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:01.356 Test configuration: 00:02:01.356 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:01.356 SPDK_TEST_NVMF=1 00:02:01.356 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:01.356 SPDK_TEST_URING=1 00:02:01.356 SPDK_TEST_VFIOUSER=1 00:02:01.356 SPDK_TEST_USDT=1 00:02:01.356 SPDK_RUN_UBSAN=1 00:02:01.356 NET_TYPE=virt 00:02:01.356 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:01.356 RUN_NIGHTLY=1 06:32:15 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:01.356 06:32:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:01.356 06:32:15 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:01.356 06:32:15 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:01.356 06:32:15 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:01.356 06:32:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.357 06:32:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.357 06:32:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.357 06:32:15 -- paths/export.sh@5 -- $ export PATH 00:02:01.357 06:32:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:01.357 06:32:15 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:01.357 06:32:15 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:01.357 06:32:15 -- 
common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734157935.XXXXXX 00:02:01.357 06:32:15 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734157935.naaS91 00:02:01.357 06:32:15 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:01.357 06:32:15 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:02:01.357 06:32:15 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:01.357 06:32:15 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:01.357 06:32:15 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:01.357 06:32:15 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:01.357 06:32:15 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:01.357 06:32:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.357 06:32:15 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:02:01.357 06:32:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:01.357 06:32:15 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:01.357 06:32:15 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:01.357 06:32:15 -- spdk/autobuild.sh@16 -- $ date -u 00:02:01.357 Sat Dec 14 06:32:15 AM UTC 2024 00:02:01.357 06:32:15 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:01.615 LTS-67-gc13c99a5e 00:02:01.615 06:32:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:01.615 06:32:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:01.615 06:32:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:01.615 06:32:15 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:01.615 06:32:15 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:01.615 06:32:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.615 ************************************ 00:02:01.615 START TEST ubsan 00:02:01.615 ************************************ 00:02:01.615 using ubsan 00:02:01.615 06:32:15 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:01.615 00:02:01.615 real 0m0.000s 00:02:01.615 user 0m0.000s 00:02:01.615 sys 0m0.000s 00:02:01.615 06:32:15 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:01.615 06:32:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.615 ************************************ 00:02:01.615 END TEST ubsan 00:02:01.615 ************************************ 00:02:01.615 06:32:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:01.615 06:32:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:01.615 06:32:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:01.615 06:32:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:01.615 06:32:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:01.615 06:32:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:01.615 06:32:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:01.615 06:32:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:01.615 06:32:15 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:02:01.873 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:01.873 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:02.131 Using 'verbs' RDMA provider 00:02:15.274 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:30.187 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:30.187 Creating mk/config.mk...done. 00:02:30.187 Creating mk/cc.flags.mk...done. 00:02:30.187 Type 'make' to build. 00:02:30.187 06:32:42 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:30.187 06:32:42 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:30.187 06:32:42 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:30.187 06:32:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.187 ************************************ 00:02:30.187 START TEST make 00:02:30.187 ************************************ 00:02:30.187 06:32:42 -- common/autotest_common.sh@1114 -- $ make -j10 00:02:30.187 make[1]: Nothing to be done for 'all'. 00:02:30.187 The Meson build system 00:02:30.187 Version: 1.5.0 00:02:30.187 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:30.187 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:30.187 Build type: native build 00:02:30.187 Project name: libvfio-user 00:02:30.187 Project version: 0.0.1 00:02:30.187 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:30.187 C linker for the host machine: cc ld.bfd 2.40-14 00:02:30.187 Host machine cpu family: x86_64 00:02:30.187 Host machine cpu: x86_64 00:02:30.187 Run-time dependency threads found: YES 00:02:30.187 Library dl found: YES 00:02:30.187 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:30.187 Run-time dependency json-c found: YES 0.17 00:02:30.187 Run-time dependency cmocka found: YES 1.1.7 00:02:30.187 Program pytest-3 found: NO 00:02:30.187 Program flake8 found: NO 00:02:30.187 Program misspell-fixer found: NO 00:02:30.187 Program restructuredtext-lint found: NO 00:02:30.187 Program valgrind found: YES (/usr/bin/valgrind) 00:02:30.187 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:30.187 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:30.187 Compiler for C supports arguments -Wwrite-strings: YES 00:02:30.187 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:30.187 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:30.188 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:30.188 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:30.188 Build targets in project: 8 00:02:30.188 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:30.188 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:30.188 00:02:30.188 libvfio-user 0.0.1 00:02:30.188 00:02:30.188 User defined options 00:02:30.188 buildtype : debug 00:02:30.188 default_library: shared 00:02:30.188 libdir : /usr/local/lib 00:02:30.188 00:02:30.188 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:30.754 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:31.012 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:31.012 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:31.012 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:31.012 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:31.012 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:31.012 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:31.012 [7/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:31.012 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:31.012 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:31.012 [10/37] Compiling C object samples/null.p/null.c.o 00:02:31.012 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:31.012 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:31.012 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:31.276 [14/37] Compiling C object samples/client.p/client.c.o 00:02:31.276 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:31.276 [16/37] Linking target samples/client 00:02:31.276 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:31.277 [18/37] Compiling C object samples/server.p/server.c.o 00:02:31.277 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:31.277 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:31.277 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:31.277 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:31.277 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:31.277 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:31.277 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:31.277 [26/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:31.277 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:31.277 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:31.277 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:31.535 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:31.535 [31/37] Linking target test/unit_tests 00:02:31.535 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:31.535 [33/37] Linking target samples/null 00:02:31.535 [34/37] Linking target samples/gpio-pci-idio-16 00:02:31.535 [35/37] Linking target samples/server 00:02:31.535 [36/37] Linking target samples/lspci 00:02:31.535 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:31.535 INFO: autodetecting backend as ninja 00:02:31.535 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:31.793 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:32.052 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:32.052 ninja: no work to do. 00:02:42.080 The Meson build system 00:02:42.080 Version: 1.5.0 00:02:42.080 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:42.080 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:42.080 Build type: native build 00:02:42.080 Program cat found: YES (/usr/bin/cat) 00:02:42.080 Project name: DPDK 00:02:42.080 Project version: 23.11.0 00:02:42.080 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:42.080 C linker for the host machine: cc ld.bfd 2.40-14 00:02:42.080 Host machine cpu family: x86_64 00:02:42.080 Host machine cpu: x86_64 00:02:42.080 Message: ## Building in Developer Mode ## 00:02:42.080 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:42.080 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:42.080 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:42.080 Program python3 found: YES (/usr/bin/python3) 00:02:42.080 Program cat found: YES (/usr/bin/cat) 00:02:42.080 Compiler for C supports arguments -march=native: YES 00:02:42.080 Checking for size of "void *" : 8 00:02:42.080 Checking for size of "void *" : 8 (cached) 00:02:42.080 Library m found: YES 00:02:42.080 Library numa found: YES 00:02:42.080 Has header "numaif.h" : YES 00:02:42.080 Library fdt found: NO 00:02:42.080 Library execinfo found: NO 00:02:42.080 Has header "execinfo.h" : YES 00:02:42.080 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:42.080 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:42.080 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:42.080 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:42.080 Run-time dependency openssl found: YES 3.1.1 00:02:42.080 Run-time dependency libpcap found: YES 1.10.4 00:02:42.080 Has header "pcap.h" with dependency libpcap: YES 00:02:42.080 Compiler for C supports arguments -Wcast-qual: YES 00:02:42.080 Compiler for C supports arguments -Wdeprecated: YES 00:02:42.080 Compiler for C supports arguments -Wformat: YES 00:02:42.080 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:42.080 Compiler for C supports arguments -Wformat-security: NO 00:02:42.080 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:42.080 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:42.080 Compiler for C supports arguments -Wnested-externs: YES 00:02:42.080 Compiler for C supports arguments -Wold-style-definition: YES 00:02:42.080 Compiler for C supports arguments -Wpointer-arith: YES 00:02:42.080 Compiler for C supports arguments -Wsign-compare: YES 00:02:42.080 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:42.080 Compiler for C supports arguments -Wundef: YES 00:02:42.080 Compiler for C supports arguments -Wwrite-strings: YES 00:02:42.080 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:42.080 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:42.080 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:42.080 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:42.080 Program objdump found: YES (/usr/bin/objdump) 00:02:42.080 
Compiler for C supports arguments -mavx512f: YES 00:02:42.080 Checking if "AVX512 checking" compiles: YES 00:02:42.080 Fetching value of define "__SSE4_2__" : 1 00:02:42.080 Fetching value of define "__AES__" : 1 00:02:42.080 Fetching value of define "__AVX__" : 1 00:02:42.080 Fetching value of define "__AVX2__" : 1 00:02:42.080 Fetching value of define "__AVX512BW__" : (undefined) 00:02:42.080 Fetching value of define "__AVX512CD__" : (undefined) 00:02:42.080 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:42.080 Fetching value of define "__AVX512F__" : (undefined) 00:02:42.080 Fetching value of define "__AVX512VL__" : (undefined) 00:02:42.080 Fetching value of define "__PCLMUL__" : 1 00:02:42.080 Fetching value of define "__RDRND__" : 1 00:02:42.080 Fetching value of define "__RDSEED__" : 1 00:02:42.080 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:42.080 Fetching value of define "__znver1__" : (undefined) 00:02:42.080 Fetching value of define "__znver2__" : (undefined) 00:02:42.080 Fetching value of define "__znver3__" : (undefined) 00:02:42.080 Fetching value of define "__znver4__" : (undefined) 00:02:42.080 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:42.080 Message: lib/log: Defining dependency "log" 00:02:42.080 Message: lib/kvargs: Defining dependency "kvargs" 00:02:42.080 Message: lib/telemetry: Defining dependency "telemetry" 00:02:42.080 Checking for function "getentropy" : NO 00:02:42.080 Message: lib/eal: Defining dependency "eal" 00:02:42.080 Message: lib/ring: Defining dependency "ring" 00:02:42.080 Message: lib/rcu: Defining dependency "rcu" 00:02:42.080 Message: lib/mempool: Defining dependency "mempool" 00:02:42.080 Message: lib/mbuf: Defining dependency "mbuf" 00:02:42.080 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:42.080 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.080 Compiler for C supports arguments -mpclmul: YES 00:02:42.080 Compiler for C supports arguments -maes: YES 00:02:42.081 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:42.081 Compiler for C supports arguments -mavx512bw: YES 00:02:42.081 Compiler for C supports arguments -mavx512dq: YES 00:02:42.081 Compiler for C supports arguments -mavx512vl: YES 00:02:42.081 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:42.081 Compiler for C supports arguments -mavx2: YES 00:02:42.081 Compiler for C supports arguments -mavx: YES 00:02:42.081 Message: lib/net: Defining dependency "net" 00:02:42.081 Message: lib/meter: Defining dependency "meter" 00:02:42.081 Message: lib/ethdev: Defining dependency "ethdev" 00:02:42.081 Message: lib/pci: Defining dependency "pci" 00:02:42.081 Message: lib/cmdline: Defining dependency "cmdline" 00:02:42.081 Message: lib/hash: Defining dependency "hash" 00:02:42.081 Message: lib/timer: Defining dependency "timer" 00:02:42.081 Message: lib/compressdev: Defining dependency "compressdev" 00:02:42.081 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:42.081 Message: lib/dmadev: Defining dependency "dmadev" 00:02:42.081 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:42.081 Message: lib/power: Defining dependency "power" 00:02:42.081 Message: lib/reorder: Defining dependency "reorder" 00:02:42.081 Message: lib/security: Defining dependency "security" 00:02:42.081 Has header "linux/userfaultfd.h" : YES 00:02:42.081 Has header "linux/vduse.h" : YES 00:02:42.081 Message: lib/vhost: Defining dependency "vhost" 00:02:42.081 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:42.081 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:42.081 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:42.081 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:42.081 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:42.081 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:42.081 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:42.081 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:42.081 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:42.081 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:42.081 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:42.081 Configuring doxy-api-html.conf using configuration 00:02:42.081 Configuring doxy-api-man.conf using configuration 00:02:42.081 Program mandb found: YES (/usr/bin/mandb) 00:02:42.081 Program sphinx-build found: NO 00:02:42.081 Configuring rte_build_config.h using configuration 00:02:42.081 Message: 00:02:42.081 ================= 00:02:42.081 Applications Enabled 00:02:42.081 ================= 00:02:42.081 00:02:42.081 apps: 00:02:42.081 00:02:42.081 00:02:42.081 Message: 00:02:42.081 ================= 00:02:42.081 Libraries Enabled 00:02:42.081 ================= 00:02:42.081 00:02:42.081 libs: 00:02:42.081 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:42.081 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:42.081 cryptodev, dmadev, power, reorder, security, vhost, 00:02:42.081 00:02:42.081 Message: 00:02:42.081 =============== 00:02:42.081 Drivers Enabled 00:02:42.081 =============== 00:02:42.081 00:02:42.081 common: 00:02:42.081 00:02:42.081 bus: 00:02:42.081 pci, vdev, 00:02:42.081 mempool: 00:02:42.081 ring, 00:02:42.081 dma: 00:02:42.081 00:02:42.081 net: 00:02:42.081 00:02:42.081 crypto: 00:02:42.081 00:02:42.081 compress: 00:02:42.081 00:02:42.081 vdpa: 00:02:42.081 00:02:42.081 00:02:42.081 Message: 00:02:42.081 ================= 00:02:42.081 Content Skipped 00:02:42.081 ================= 00:02:42.081 00:02:42.081 apps: 00:02:42.081 dumpcap: explicitly disabled via build config 00:02:42.081 graph: explicitly disabled via build config 00:02:42.081 pdump: explicitly disabled via build config 00:02:42.081 proc-info: explicitly disabled via build config 00:02:42.081 test-acl: explicitly disabled via build config 00:02:42.081 test-bbdev: explicitly disabled via build config 00:02:42.081 test-cmdline: explicitly disabled via build config 00:02:42.081 test-compress-perf: explicitly disabled via build config 00:02:42.081 test-crypto-perf: explicitly disabled via build config 00:02:42.081 test-dma-perf: explicitly disabled via build config 00:02:42.081 test-eventdev: explicitly disabled via build config 00:02:42.081 test-fib: explicitly disabled via build config 00:02:42.081 test-flow-perf: explicitly disabled via build config 00:02:42.081 test-gpudev: explicitly disabled via build config 00:02:42.081 test-mldev: explicitly disabled via build config 00:02:42.081 test-pipeline: explicitly disabled via build config 00:02:42.081 test-pmd: explicitly disabled via build config 00:02:42.081 test-regex: explicitly disabled via build config 00:02:42.081 test-sad: explicitly disabled via build config 00:02:42.081 test-security-perf: explicitly disabled via build config 00:02:42.081 00:02:42.081 libs: 00:02:42.081 metrics: explicitly 
disabled via build config 00:02:42.081 acl: explicitly disabled via build config 00:02:42.081 bbdev: explicitly disabled via build config 00:02:42.081 bitratestats: explicitly disabled via build config 00:02:42.081 bpf: explicitly disabled via build config 00:02:42.081 cfgfile: explicitly disabled via build config 00:02:42.081 distributor: explicitly disabled via build config 00:02:42.081 efd: explicitly disabled via build config 00:02:42.081 eventdev: explicitly disabled via build config 00:02:42.081 dispatcher: explicitly disabled via build config 00:02:42.081 gpudev: explicitly disabled via build config 00:02:42.081 gro: explicitly disabled via build config 00:02:42.081 gso: explicitly disabled via build config 00:02:42.081 ip_frag: explicitly disabled via build config 00:02:42.081 jobstats: explicitly disabled via build config 00:02:42.081 latencystats: explicitly disabled via build config 00:02:42.081 lpm: explicitly disabled via build config 00:02:42.081 member: explicitly disabled via build config 00:02:42.081 pcapng: explicitly disabled via build config 00:02:42.081 rawdev: explicitly disabled via build config 00:02:42.081 regexdev: explicitly disabled via build config 00:02:42.081 mldev: explicitly disabled via build config 00:02:42.081 rib: explicitly disabled via build config 00:02:42.081 sched: explicitly disabled via build config 00:02:42.081 stack: explicitly disabled via build config 00:02:42.081 ipsec: explicitly disabled via build config 00:02:42.081 pdcp: explicitly disabled via build config 00:02:42.081 fib: explicitly disabled via build config 00:02:42.081 port: explicitly disabled via build config 00:02:42.081 pdump: explicitly disabled via build config 00:02:42.081 table: explicitly disabled via build config 00:02:42.081 pipeline: explicitly disabled via build config 00:02:42.081 graph: explicitly disabled via build config 00:02:42.081 node: explicitly disabled via build config 00:02:42.081 00:02:42.081 drivers: 00:02:42.081 common/cpt: not in enabled drivers build config 00:02:42.081 common/dpaax: not in enabled drivers build config 00:02:42.081 common/iavf: not in enabled drivers build config 00:02:42.081 common/idpf: not in enabled drivers build config 00:02:42.081 common/mvep: not in enabled drivers build config 00:02:42.081 common/octeontx: not in enabled drivers build config 00:02:42.081 bus/auxiliary: not in enabled drivers build config 00:02:42.081 bus/cdx: not in enabled drivers build config 00:02:42.081 bus/dpaa: not in enabled drivers build config 00:02:42.081 bus/fslmc: not in enabled drivers build config 00:02:42.081 bus/ifpga: not in enabled drivers build config 00:02:42.081 bus/platform: not in enabled drivers build config 00:02:42.081 bus/vmbus: not in enabled drivers build config 00:02:42.081 common/cnxk: not in enabled drivers build config 00:02:42.081 common/mlx5: not in enabled drivers build config 00:02:42.081 common/nfp: not in enabled drivers build config 00:02:42.081 common/qat: not in enabled drivers build config 00:02:42.081 common/sfc_efx: not in enabled drivers build config 00:02:42.081 mempool/bucket: not in enabled drivers build config 00:02:42.081 mempool/cnxk: not in enabled drivers build config 00:02:42.081 mempool/dpaa: not in enabled drivers build config 00:02:42.081 mempool/dpaa2: not in enabled drivers build config 00:02:42.081 mempool/octeontx: not in enabled drivers build config 00:02:42.081 mempool/stack: not in enabled drivers build config 00:02:42.081 dma/cnxk: not in enabled drivers build config 00:02:42.081 dma/dpaa: not in 
enabled drivers build config 00:02:42.081 dma/dpaa2: not in enabled drivers build config 00:02:42.081 dma/hisilicon: not in enabled drivers build config 00:02:42.081 dma/idxd: not in enabled drivers build config 00:02:42.081 dma/ioat: not in enabled drivers build config 00:02:42.081 dma/skeleton: not in enabled drivers build config 00:02:42.081 net/af_packet: not in enabled drivers build config 00:02:42.081 net/af_xdp: not in enabled drivers build config 00:02:42.081 net/ark: not in enabled drivers build config 00:02:42.081 net/atlantic: not in enabled drivers build config 00:02:42.081 net/avp: not in enabled drivers build config 00:02:42.081 net/axgbe: not in enabled drivers build config 00:02:42.081 net/bnx2x: not in enabled drivers build config 00:02:42.081 net/bnxt: not in enabled drivers build config 00:02:42.081 net/bonding: not in enabled drivers build config 00:02:42.081 net/cnxk: not in enabled drivers build config 00:02:42.081 net/cpfl: not in enabled drivers build config 00:02:42.081 net/cxgbe: not in enabled drivers build config 00:02:42.081 net/dpaa: not in enabled drivers build config 00:02:42.081 net/dpaa2: not in enabled drivers build config 00:02:42.081 net/e1000: not in enabled drivers build config 00:02:42.081 net/ena: not in enabled drivers build config 00:02:42.081 net/enetc: not in enabled drivers build config 00:02:42.081 net/enetfec: not in enabled drivers build config 00:02:42.081 net/enic: not in enabled drivers build config 00:02:42.081 net/failsafe: not in enabled drivers build config 00:02:42.081 net/fm10k: not in enabled drivers build config 00:02:42.081 net/gve: not in enabled drivers build config 00:02:42.081 net/hinic: not in enabled drivers build config 00:02:42.081 net/hns3: not in enabled drivers build config 00:02:42.081 net/i40e: not in enabled drivers build config 00:02:42.081 net/iavf: not in enabled drivers build config 00:02:42.081 net/ice: not in enabled drivers build config 00:02:42.081 net/idpf: not in enabled drivers build config 00:02:42.081 net/igc: not in enabled drivers build config 00:02:42.081 net/ionic: not in enabled drivers build config 00:02:42.081 net/ipn3ke: not in enabled drivers build config 00:02:42.081 net/ixgbe: not in enabled drivers build config 00:02:42.082 net/mana: not in enabled drivers build config 00:02:42.082 net/memif: not in enabled drivers build config 00:02:42.082 net/mlx4: not in enabled drivers build config 00:02:42.082 net/mlx5: not in enabled drivers build config 00:02:42.082 net/mvneta: not in enabled drivers build config 00:02:42.082 net/mvpp2: not in enabled drivers build config 00:02:42.082 net/netvsc: not in enabled drivers build config 00:02:42.082 net/nfb: not in enabled drivers build config 00:02:42.082 net/nfp: not in enabled drivers build config 00:02:42.082 net/ngbe: not in enabled drivers build config 00:02:42.082 net/null: not in enabled drivers build config 00:02:42.082 net/octeontx: not in enabled drivers build config 00:02:42.082 net/octeon_ep: not in enabled drivers build config 00:02:42.082 net/pcap: not in enabled drivers build config 00:02:42.082 net/pfe: not in enabled drivers build config 00:02:42.082 net/qede: not in enabled drivers build config 00:02:42.082 net/ring: not in enabled drivers build config 00:02:42.082 net/sfc: not in enabled drivers build config 00:02:42.082 net/softnic: not in enabled drivers build config 00:02:42.082 net/tap: not in enabled drivers build config 00:02:42.082 net/thunderx: not in enabled drivers build config 00:02:42.082 net/txgbe: not in enabled drivers 
build config 00:02:42.082 net/vdev_netvsc: not in enabled drivers build config 00:02:42.082 net/vhost: not in enabled drivers build config 00:02:42.082 net/virtio: not in enabled drivers build config 00:02:42.082 net/vmxnet3: not in enabled drivers build config 00:02:42.082 raw/*: missing internal dependency, "rawdev" 00:02:42.082 crypto/armv8: not in enabled drivers build config 00:02:42.082 crypto/bcmfs: not in enabled drivers build config 00:02:42.082 crypto/caam_jr: not in enabled drivers build config 00:02:42.082 crypto/ccp: not in enabled drivers build config 00:02:42.082 crypto/cnxk: not in enabled drivers build config 00:02:42.082 crypto/dpaa_sec: not in enabled drivers build config 00:02:42.082 crypto/dpaa2_sec: not in enabled drivers build config 00:02:42.082 crypto/ipsec_mb: not in enabled drivers build config 00:02:42.082 crypto/mlx5: not in enabled drivers build config 00:02:42.082 crypto/mvsam: not in enabled drivers build config 00:02:42.082 crypto/nitrox: not in enabled drivers build config 00:02:42.082 crypto/null: not in enabled drivers build config 00:02:42.082 crypto/octeontx: not in enabled drivers build config 00:02:42.082 crypto/openssl: not in enabled drivers build config 00:02:42.082 crypto/scheduler: not in enabled drivers build config 00:02:42.082 crypto/uadk: not in enabled drivers build config 00:02:42.082 crypto/virtio: not in enabled drivers build config 00:02:42.082 compress/isal: not in enabled drivers build config 00:02:42.082 compress/mlx5: not in enabled drivers build config 00:02:42.082 compress/octeontx: not in enabled drivers build config 00:02:42.082 compress/zlib: not in enabled drivers build config 00:02:42.082 regex/*: missing internal dependency, "regexdev" 00:02:42.082 ml/*: missing internal dependency, "mldev" 00:02:42.082 vdpa/ifc: not in enabled drivers build config 00:02:42.082 vdpa/mlx5: not in enabled drivers build config 00:02:42.082 vdpa/nfp: not in enabled drivers build config 00:02:42.082 vdpa/sfc: not in enabled drivers build config 00:02:42.082 event/*: missing internal dependency, "eventdev" 00:02:42.082 baseband/*: missing internal dependency, "bbdev" 00:02:42.082 gpu/*: missing internal dependency, "gpudev" 00:02:42.082 00:02:42.082 00:02:42.082 Build targets in project: 85 00:02:42.082 00:02:42.082 DPDK 23.11.0 00:02:42.082 00:02:42.082 User defined options 00:02:42.082 buildtype : debug 00:02:42.082 default_library : shared 00:02:42.082 libdir : lib 00:02:42.082 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:42.082 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:42.082 c_link_args : 00:02:42.082 cpu_instruction_set: native 00:02:42.082 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:42.082 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:42.082 enable_docs : false 00:02:42.082 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:42.082 enable_kmods : false 00:02:42.082 tests : false 00:02:42.082 00:02:42.082 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:42.082 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:42.082 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:42.082 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:42.082 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:42.082 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:42.082 [5/265] Linking static target lib/librte_kvargs.a 00:02:42.082 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:42.082 [7/265] Linking static target lib/librte_log.a 00:02:42.082 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:42.082 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:42.082 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:42.082 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.341 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:42.341 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:42.599 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:42.599 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:42.599 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:42.599 [17/265] Linking static target lib/librte_telemetry.a 00:02:42.599 [18/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.599 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:42.599 [20/265] Linking target lib/librte_log.so.24.0 00:02:42.857 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:42.857 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:42.857 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:43.115 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:43.115 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:43.115 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:43.372 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:43.372 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:43.372 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:43.630 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:43.630 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:43.630 [32/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.630 [33/265] Linking target lib/librte_telemetry.so.24.0 00:02:43.630 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:43.630 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:43.889 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:43.889 [37/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:43.889 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:44.148 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:44.148 [40/265] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:44.148 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.148 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:44.148 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:44.148 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:44.406 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.664 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.664 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.664 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:44.664 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:44.664 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:44.923 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.923 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:45.181 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:45.181 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:45.181 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:45.181 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:45.440 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:45.440 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:45.440 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:45.440 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:45.440 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:45.440 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:45.698 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:45.698 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:45.698 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:45.956 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:45.956 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:46.214 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:46.214 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:46.214 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:46.473 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:46.473 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:46.473 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:46.473 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:46.473 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:46.473 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:46.731 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:46.731 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:46.731 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:46.988 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:46.988 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:47.247 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:47.247 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:47.505 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:47.505 [85/265] Linking static target lib/librte_ring.a 00:02:47.505 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:47.505 [87/265] Linking static target lib/librte_eal.a 00:02:47.764 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:47.764 [89/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:47.764 [90/265] Linking static target lib/librte_rcu.a 00:02:47.764 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:47.764 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:48.022 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:48.022 [94/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.022 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:48.022 [96/265] Linking static target lib/librte_mempool.a 00:02:48.022 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:48.022 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:48.022 [99/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.022 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:48.280 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:48.539 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:48.539 [103/265] Linking static target lib/librte_mbuf.a 00:02:48.539 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:48.539 [105/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:48.539 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:48.798 [107/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:48.798 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:48.798 [109/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:48.798 [110/265] Linking static target lib/librte_net.a 00:02:48.798 [111/265] Linking static target lib/librte_meter.a 00:02:49.056 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:49.314 [113/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.314 [114/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.314 [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.314 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:49.314 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:49.314 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:49.572 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.830 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:50.088 [121/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:50.088 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:50.346 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:50.346 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:50.346 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:50.346 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:50.346 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:50.346 [128/265] Linking static target lib/librte_pci.a 00:02:50.346 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:50.604 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:50.604 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:50.605 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:50.605 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:50.605 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:50.605 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:50.863 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:50.863 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:50.863 [138/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.863 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:50.863 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:50.863 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:50.863 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:50.863 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.122 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.122 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.122 [146/265] Linking static target lib/librte_cmdline.a 00:02:51.379 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:51.379 [148/265] Linking static target lib/librte_ethdev.a 00:02:51.379 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:51.379 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:51.379 [151/265] Linking static target lib/librte_timer.a 00:02:51.637 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:51.637 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:51.895 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:51.895 [155/265] Linking static target lib/librte_compressdev.a 00:02:51.895 [156/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.153 [157/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:52.153 [158/265] Linking static target lib/librte_hash.a 00:02:52.153 [159/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.153 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.153 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:52.412 
[162/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.670 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.670 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:52.670 [165/265] Linking static target lib/librte_dmadev.a 00:02:52.670 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.670 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.928 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.928 [169/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.928 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.928 [171/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.928 [172/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:52.928 [173/265] Linking static target lib/librte_cryptodev.a 00:02:53.187 [174/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.444 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:53.444 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.444 [177/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.444 [178/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:53.444 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.702 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.702 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:53.960 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.960 [183/265] Linking static target lib/librte_power.a 00:02:54.218 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:54.218 [185/265] Linking static target lib/librte_reorder.a 00:02:54.218 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:54.218 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:54.477 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:54.477 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:54.477 [190/265] Linking static target lib/librte_security.a 00:02:54.477 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:54.735 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.992 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.992 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.250 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.250 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:55.250 [197/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.250 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:55.250 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:55.508 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:55.767 [201/265] 
Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:55.767 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:55.767 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:55.767 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:55.767 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:55.767 [206/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:56.024 [207/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:56.024 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:56.024 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:56.024 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:56.024 [211/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:56.024 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.024 [213/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.024 [214/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.024 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.024 [216/265] Linking static target drivers/librte_bus_vdev.a 00:02:56.024 [217/265] Linking static target drivers/librte_bus_pci.a 00:02:56.282 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:56.282 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:56.282 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.592 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:56.592 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:56.592 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:56.592 [224/265] Linking static target drivers/librte_mempool_ring.a 00:02:56.592 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.527 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:57.527 [227/265] Linking static target lib/librte_vhost.a 00:02:58.462 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.462 [229/265] Linking target lib/librte_eal.so.24.0 00:02:58.721 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:58.721 [231/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.721 [232/265] Linking target lib/librte_ring.so.24.0 00:02:58.721 [233/265] Linking target lib/librte_pci.so.24.0 00:02:58.721 [234/265] Linking target lib/librte_timer.so.24.0 00:02:58.721 [235/265] Linking target lib/librte_dmadev.so.24.0 00:02:58.721 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:58.721 [237/265] Linking target lib/librte_meter.so.24.0 00:02:58.721 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:58.721 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:58.721 [240/265] Generating symbol file 
lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:58.721 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:58.721 [242/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:58.721 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:58.721 [244/265] Linking target lib/librte_rcu.so.24.0 00:02:58.980 [245/265] Linking target lib/librte_mempool.so.24.0 00:02:58.980 [246/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.980 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:58.980 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:58.980 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:58.980 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:59.239 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:59.239 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:02:59.239 [253/265] Linking target lib/librte_reorder.so.24.0 00:02:59.239 [254/265] Linking target lib/librte_compressdev.so.24.0 00:02:59.239 [255/265] Linking target lib/librte_net.so.24.0 00:02:59.498 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:59.498 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:59.498 [258/265] Linking target lib/librte_security.so.24.0 00:02:59.498 [259/265] Linking target lib/librte_cmdline.so.24.0 00:02:59.498 [260/265] Linking target lib/librte_hash.so.24.0 00:02:59.498 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:59.757 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:59.757 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:59.757 [264/265] Linking target lib/librte_power.so.24.0 00:02:59.757 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:59.757 INFO: autodetecting backend as ninja 00:02:59.757 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:01.135 CC lib/ut_mock/mock.o 00:03:01.135 CC lib/ut/ut.o 00:03:01.135 CC lib/log/log_deprecated.o 00:03:01.135 CC lib/log/log.o 00:03:01.135 CC lib/log/log_flags.o 00:03:01.135 LIB libspdk_ut_mock.a 00:03:01.135 LIB libspdk_ut.a 00:03:01.135 SO libspdk_ut_mock.so.5.0 00:03:01.135 SO libspdk_ut.so.1.0 00:03:01.135 LIB libspdk_log.a 00:03:01.135 SO libspdk_log.so.6.1 00:03:01.135 SYMLINK libspdk_ut.so 00:03:01.135 SYMLINK libspdk_ut_mock.so 00:03:01.135 SYMLINK libspdk_log.so 00:03:01.393 CXX lib/trace_parser/trace.o 00:03:01.393 CC lib/dma/dma.o 00:03:01.393 CC lib/util/base64.o 00:03:01.393 CC lib/util/bit_array.o 00:03:01.393 CC lib/util/cpuset.o 00:03:01.393 CC lib/util/crc16.o 00:03:01.393 CC lib/util/crc32c.o 00:03:01.393 CC lib/util/crc32.o 00:03:01.393 CC lib/ioat/ioat.o 00:03:01.393 CC lib/vfio_user/host/vfio_user_pci.o 00:03:01.393 CC lib/util/crc32_ieee.o 00:03:01.652 CC lib/util/crc64.o 00:03:01.652 CC lib/util/dif.o 00:03:01.652 CC lib/util/fd.o 00:03:01.652 LIB libspdk_dma.a 00:03:01.652 CC lib/util/file.o 00:03:01.652 SO libspdk_dma.so.3.0 00:03:01.652 CC lib/util/hexlify.o 00:03:01.652 SYMLINK libspdk_dma.so 00:03:01.652 CC lib/util/iov.o 00:03:01.652 CC lib/util/math.o 00:03:01.652 CC lib/util/pipe.o 00:03:01.652 LIB libspdk_ioat.a 00:03:01.652 CC lib/util/strerror_tls.o 
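As a point of reference, the DPDK sub-build traced above (the "User defined options" summary together with the ninja command the log records) corresponds roughly to the meson/ninja sequence sketched here. This is an approximation assembled from the values printed in the summary, not the exact command line the job ran; the long disable_apps and disable_libs lists are elided.

    # Run from the DPDK source tree (spdk/dpdk); option values copied from the
    # "User defined options" block above, full disable_* lists omitted for brevity.
    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build --libdir=lib \
        -Dbuildtype=debug -Ddefault_library=shared \
        -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
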
00:03:01.652 CC lib/vfio_user/host/vfio_user.o 00:03:01.652 SO libspdk_ioat.so.6.0 00:03:01.652 CC lib/util/string.o 00:03:01.911 CC lib/util/uuid.o 00:03:01.911 SYMLINK libspdk_ioat.so 00:03:01.911 CC lib/util/fd_group.o 00:03:01.911 CC lib/util/xor.o 00:03:01.911 CC lib/util/zipf.o 00:03:01.911 LIB libspdk_vfio_user.a 00:03:01.911 SO libspdk_vfio_user.so.4.0 00:03:01.911 SYMLINK libspdk_vfio_user.so 00:03:02.170 LIB libspdk_util.a 00:03:02.170 SO libspdk_util.so.8.0 00:03:02.429 SYMLINK libspdk_util.so 00:03:02.429 LIB libspdk_trace_parser.a 00:03:02.429 SO libspdk_trace_parser.so.4.0 00:03:02.429 CC lib/json/json_parse.o 00:03:02.429 CC lib/json/json_util.o 00:03:02.429 CC lib/json/json_write.o 00:03:02.429 CC lib/conf/conf.o 00:03:02.429 CC lib/rdma/common.o 00:03:02.429 CC lib/env_dpdk/env.o 00:03:02.429 CC lib/rdma/rdma_verbs.o 00:03:02.429 CC lib/vmd/vmd.o 00:03:02.429 CC lib/idxd/idxd.o 00:03:02.429 SYMLINK libspdk_trace_parser.so 00:03:02.429 CC lib/idxd/idxd_user.o 00:03:02.688 LIB libspdk_conf.a 00:03:02.688 CC lib/idxd/idxd_kernel.o 00:03:02.688 CC lib/vmd/led.o 00:03:02.688 CC lib/env_dpdk/memory.o 00:03:02.688 SO libspdk_conf.so.5.0 00:03:02.688 LIB libspdk_rdma.a 00:03:02.688 CC lib/env_dpdk/pci.o 00:03:02.688 LIB libspdk_json.a 00:03:02.688 SYMLINK libspdk_conf.so 00:03:02.688 SO libspdk_rdma.so.5.0 00:03:02.688 CC lib/env_dpdk/init.o 00:03:02.947 SO libspdk_json.so.5.1 00:03:02.947 SYMLINK libspdk_rdma.so 00:03:02.947 CC lib/env_dpdk/threads.o 00:03:02.947 CC lib/env_dpdk/pci_ioat.o 00:03:02.947 CC lib/env_dpdk/pci_virtio.o 00:03:02.947 SYMLINK libspdk_json.so 00:03:02.947 CC lib/env_dpdk/pci_vmd.o 00:03:02.947 CC lib/env_dpdk/pci_idxd.o 00:03:02.947 CC lib/env_dpdk/pci_event.o 00:03:02.947 CC lib/env_dpdk/sigbus_handler.o 00:03:02.947 CC lib/env_dpdk/pci_dpdk.o 00:03:02.947 LIB libspdk_idxd.a 00:03:02.947 SO libspdk_idxd.so.11.0 00:03:03.206 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:03.206 SYMLINK libspdk_idxd.so 00:03:03.206 LIB libspdk_vmd.a 00:03:03.206 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:03.206 SO libspdk_vmd.so.5.0 00:03:03.206 SYMLINK libspdk_vmd.so 00:03:03.206 CC lib/jsonrpc/jsonrpc_server.o 00:03:03.206 CC lib/jsonrpc/jsonrpc_client.o 00:03:03.206 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:03.206 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:03.465 LIB libspdk_jsonrpc.a 00:03:03.465 SO libspdk_jsonrpc.so.5.1 00:03:03.724 SYMLINK libspdk_jsonrpc.so 00:03:03.724 CC lib/rpc/rpc.o 00:03:03.983 LIB libspdk_env_dpdk.a 00:03:03.983 SO libspdk_env_dpdk.so.13.0 00:03:03.983 LIB libspdk_rpc.a 00:03:03.983 SO libspdk_rpc.so.5.0 00:03:03.983 SYMLINK libspdk_rpc.so 00:03:03.983 SYMLINK libspdk_env_dpdk.so 00:03:04.242 CC lib/trace/trace.o 00:03:04.242 CC lib/trace/trace_flags.o 00:03:04.242 CC lib/trace/trace_rpc.o 00:03:04.242 CC lib/notify/notify.o 00:03:04.242 CC lib/notify/notify_rpc.o 00:03:04.242 CC lib/sock/sock_rpc.o 00:03:04.242 CC lib/sock/sock.o 00:03:04.501 LIB libspdk_notify.a 00:03:04.501 LIB libspdk_trace.a 00:03:04.501 SO libspdk_notify.so.5.0 00:03:04.501 SO libspdk_trace.so.9.0 00:03:04.501 SYMLINK libspdk_notify.so 00:03:04.501 SYMLINK libspdk_trace.so 00:03:04.501 LIB libspdk_sock.a 00:03:04.760 SO libspdk_sock.so.8.0 00:03:04.760 CC lib/thread/iobuf.o 00:03:04.760 CC lib/thread/thread.o 00:03:04.760 SYMLINK libspdk_sock.so 00:03:05.019 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:05.019 CC lib/nvme/nvme_ns_cmd.o 00:03:05.019 CC lib/nvme/nvme_ctrlr.o 00:03:05.019 CC lib/nvme/nvme_fabric.o 00:03:05.019 CC lib/nvme/nvme_qpair.o 00:03:05.019 CC lib/nvme/nvme_ns.o 00:03:05.019 
CC lib/nvme/nvme_pcie_common.o 00:03:05.019 CC lib/nvme/nvme_pcie.o 00:03:05.019 CC lib/nvme/nvme.o 00:03:05.587 CC lib/nvme/nvme_quirks.o 00:03:05.587 CC lib/nvme/nvme_transport.o 00:03:05.587 CC lib/nvme/nvme_discovery.o 00:03:05.846 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:05.846 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:05.846 CC lib/nvme/nvme_tcp.o 00:03:06.105 CC lib/nvme/nvme_opal.o 00:03:06.105 CC lib/nvme/nvme_io_msg.o 00:03:06.364 CC lib/nvme/nvme_poll_group.o 00:03:06.364 LIB libspdk_thread.a 00:03:06.364 CC lib/nvme/nvme_zns.o 00:03:06.364 SO libspdk_thread.so.9.0 00:03:06.364 CC lib/nvme/nvme_cuse.o 00:03:06.364 SYMLINK libspdk_thread.so 00:03:06.364 CC lib/nvme/nvme_vfio_user.o 00:03:06.364 CC lib/nvme/nvme_rdma.o 00:03:06.623 CC lib/accel/accel.o 00:03:06.623 CC lib/blob/blobstore.o 00:03:06.623 CC lib/accel/accel_rpc.o 00:03:06.882 CC lib/accel/accel_sw.o 00:03:07.142 CC lib/init/json_config.o 00:03:07.142 CC lib/blob/request.o 00:03:07.142 CC lib/virtio/virtio.o 00:03:07.142 CC lib/init/subsystem.o 00:03:07.142 CC lib/vfu_tgt/tgt_endpoint.o 00:03:07.142 CC lib/vfu_tgt/tgt_rpc.o 00:03:07.401 CC lib/virtio/virtio_vhost_user.o 00:03:07.401 CC lib/virtio/virtio_vfio_user.o 00:03:07.401 CC lib/init/subsystem_rpc.o 00:03:07.401 CC lib/blob/zeroes.o 00:03:07.401 CC lib/blob/blob_bs_dev.o 00:03:07.401 CC lib/virtio/virtio_pci.o 00:03:07.401 LIB libspdk_vfu_tgt.a 00:03:07.401 CC lib/init/rpc.o 00:03:07.660 SO libspdk_vfu_tgt.so.2.0 00:03:07.660 SYMLINK libspdk_vfu_tgt.so 00:03:07.660 LIB libspdk_accel.a 00:03:07.660 LIB libspdk_init.a 00:03:07.660 SO libspdk_accel.so.14.0 00:03:07.660 SO libspdk_init.so.4.0 00:03:07.660 SYMLINK libspdk_accel.so 00:03:07.660 LIB libspdk_virtio.a 00:03:07.660 SYMLINK libspdk_init.so 00:03:07.660 LIB libspdk_nvme.a 00:03:07.919 SO libspdk_virtio.so.6.0 00:03:07.919 CC lib/bdev/bdev.o 00:03:07.919 CC lib/bdev/bdev_rpc.o 00:03:07.919 CC lib/bdev/part.o 00:03:07.919 CC lib/bdev/bdev_zone.o 00:03:07.919 CC lib/bdev/scsi_nvme.o 00:03:07.919 SYMLINK libspdk_virtio.so 00:03:07.919 CC lib/event/reactor.o 00:03:07.919 CC lib/event/log_rpc.o 00:03:07.919 CC lib/event/app.o 00:03:07.919 SO libspdk_nvme.so.12.0 00:03:08.178 CC lib/event/app_rpc.o 00:03:08.178 CC lib/event/scheduler_static.o 00:03:08.178 SYMLINK libspdk_nvme.so 00:03:08.437 LIB libspdk_event.a 00:03:08.437 SO libspdk_event.so.12.0 00:03:08.437 SYMLINK libspdk_event.so 00:03:09.374 LIB libspdk_blob.a 00:03:09.374 SO libspdk_blob.so.10.1 00:03:09.633 SYMLINK libspdk_blob.so 00:03:09.633 CC lib/blobfs/blobfs.o 00:03:09.633 CC lib/blobfs/tree.o 00:03:09.633 CC lib/lvol/lvol.o 00:03:10.568 LIB libspdk_bdev.a 00:03:10.568 SO libspdk_bdev.so.14.0 00:03:10.568 LIB libspdk_blobfs.a 00:03:10.568 SYMLINK libspdk_bdev.so 00:03:10.568 SO libspdk_blobfs.so.9.0 00:03:10.568 LIB libspdk_lvol.a 00:03:10.568 SYMLINK libspdk_blobfs.so 00:03:10.568 SO libspdk_lvol.so.9.1 00:03:10.568 CC lib/scsi/dev.o 00:03:10.568 CC lib/ublk/ublk.o 00:03:10.568 CC lib/scsi/lun.o 00:03:10.568 CC lib/scsi/port.o 00:03:10.568 CC lib/scsi/scsi.o 00:03:10.568 CC lib/ublk/ublk_rpc.o 00:03:10.568 CC lib/nvmf/ctrlr.o 00:03:10.827 CC lib/ftl/ftl_core.o 00:03:10.827 CC lib/nbd/nbd.o 00:03:10.827 SYMLINK libspdk_lvol.so 00:03:10.827 CC lib/nbd/nbd_rpc.o 00:03:10.827 CC lib/scsi/scsi_bdev.o 00:03:10.827 CC lib/scsi/scsi_pr.o 00:03:10.827 CC lib/nvmf/ctrlr_discovery.o 00:03:10.827 CC lib/nvmf/ctrlr_bdev.o 00:03:10.827 CC lib/scsi/scsi_rpc.o 00:03:11.085 CC lib/scsi/task.o 00:03:11.085 CC lib/nvmf/subsystem.o 00:03:11.085 CC lib/ftl/ftl_init.o 
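The SPDK make output above tags each step: CC for compiling an object, LIB for a static archive, and, by appearance, SO for the versioned shared object and SYMLINK for the unversioned link. A generic illustration of what those steps conventionally amount to is sketched below; the file names are hypothetical and this is the standard toolchain pattern, not SPDK's actual make rules.

    gcc -c -fPIC foo.c -o foo.o                                  # CC: compile a position-independent object
    ar rcs libfoo.a foo.o                                        # LIB: static archive
    gcc -shared -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0 foo.o   # SO: versioned shared object
    ln -sf libfoo.so.1.0 libfoo.so                               # SYMLINK: unversioned link for the linker
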
00:03:11.085 LIB libspdk_nbd.a 00:03:11.085 SO libspdk_nbd.so.6.0 00:03:11.085 SYMLINK libspdk_nbd.so 00:03:11.085 CC lib/nvmf/nvmf.o 00:03:11.085 CC lib/ftl/ftl_layout.o 00:03:11.343 CC lib/nvmf/nvmf_rpc.o 00:03:11.343 CC lib/ftl/ftl_debug.o 00:03:11.343 LIB libspdk_ublk.a 00:03:11.343 SO libspdk_ublk.so.2.0 00:03:11.343 LIB libspdk_scsi.a 00:03:11.343 CC lib/nvmf/transport.o 00:03:11.343 SO libspdk_scsi.so.8.0 00:03:11.343 SYMLINK libspdk_ublk.so 00:03:11.344 CC lib/ftl/ftl_io.o 00:03:11.602 SYMLINK libspdk_scsi.so 00:03:11.602 CC lib/ftl/ftl_sb.o 00:03:11.602 CC lib/ftl/ftl_l2p.o 00:03:11.602 CC lib/ftl/ftl_l2p_flat.o 00:03:11.602 CC lib/nvmf/tcp.o 00:03:11.602 CC lib/ftl/ftl_nv_cache.o 00:03:11.602 CC lib/ftl/ftl_band.o 00:03:11.861 CC lib/vhost/vhost.o 00:03:11.861 CC lib/iscsi/conn.o 00:03:12.119 CC lib/vhost/vhost_rpc.o 00:03:12.119 CC lib/vhost/vhost_scsi.o 00:03:12.120 CC lib/ftl/ftl_band_ops.o 00:03:12.120 CC lib/ftl/ftl_writer.o 00:03:12.120 CC lib/nvmf/vfio_user.o 00:03:12.378 CC lib/iscsi/init_grp.o 00:03:12.378 CC lib/iscsi/iscsi.o 00:03:12.378 CC lib/iscsi/md5.o 00:03:12.378 CC lib/vhost/vhost_blk.o 00:03:12.636 CC lib/nvmf/rdma.o 00:03:12.636 CC lib/ftl/ftl_rq.o 00:03:12.636 CC lib/iscsi/param.o 00:03:12.636 CC lib/iscsi/portal_grp.o 00:03:12.636 CC lib/vhost/rte_vhost_user.o 00:03:12.916 CC lib/ftl/ftl_reloc.o 00:03:12.916 CC lib/ftl/ftl_l2p_cache.o 00:03:12.916 CC lib/iscsi/tgt_node.o 00:03:12.916 CC lib/iscsi/iscsi_subsystem.o 00:03:13.225 CC lib/iscsi/iscsi_rpc.o 00:03:13.225 CC lib/iscsi/task.o 00:03:13.225 CC lib/ftl/ftl_p2l.o 00:03:13.483 CC lib/ftl/mngt/ftl_mngt.o 00:03:13.483 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:13.483 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:13.483 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:13.483 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:13.483 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:13.741 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:13.741 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:13.741 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:13.741 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:13.741 LIB libspdk_iscsi.a 00:03:13.741 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:13.741 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:13.741 LIB libspdk_vhost.a 00:03:13.741 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:13.741 SO libspdk_iscsi.so.7.0 00:03:13.741 CC lib/ftl/utils/ftl_conf.o 00:03:13.741 SO libspdk_vhost.so.7.1 00:03:13.999 CC lib/ftl/utils/ftl_md.o 00:03:13.999 CC lib/ftl/utils/ftl_mempool.o 00:03:13.999 CC lib/ftl/utils/ftl_bitmap.o 00:03:13.999 SYMLINK libspdk_vhost.so 00:03:13.999 CC lib/ftl/utils/ftl_property.o 00:03:13.999 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:13.999 SYMLINK libspdk_iscsi.so 00:03:13.999 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:13.999 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:13.999 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:13.999 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:13.999 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:14.257 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:14.257 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:14.257 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:14.257 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:14.257 CC lib/ftl/base/ftl_base_dev.o 00:03:14.257 CC lib/ftl/base/ftl_base_bdev.o 00:03:14.257 CC lib/ftl/ftl_trace.o 00:03:14.516 LIB libspdk_ftl.a 00:03:14.516 LIB libspdk_nvmf.a 00:03:14.775 SO libspdk_nvmf.so.17.0 00:03:14.775 SO libspdk_ftl.so.8.0 00:03:14.775 SYMLINK libspdk_nvmf.so 00:03:15.034 SYMLINK libspdk_ftl.so 00:03:15.293 CC module/env_dpdk/env_dpdk_rpc.o 00:03:15.293 CC module/vfu_device/vfu_virtio.o 00:03:15.293 CC 
module/accel/iaa/accel_iaa.o 00:03:15.293 CC module/accel/dsa/accel_dsa.o 00:03:15.293 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:15.293 CC module/accel/error/accel_error.o 00:03:15.293 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:15.293 CC module/blob/bdev/blob_bdev.o 00:03:15.293 CC module/accel/ioat/accel_ioat.o 00:03:15.293 CC module/sock/posix/posix.o 00:03:15.293 LIB libspdk_env_dpdk_rpc.a 00:03:15.552 SO libspdk_env_dpdk_rpc.so.5.0 00:03:15.552 LIB libspdk_scheduler_dpdk_governor.a 00:03:15.552 SYMLINK libspdk_env_dpdk_rpc.so 00:03:15.552 CC module/accel/error/accel_error_rpc.o 00:03:15.552 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:15.552 CC module/accel/iaa/accel_iaa_rpc.o 00:03:15.552 CC module/accel/ioat/accel_ioat_rpc.o 00:03:15.552 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:15.552 CC module/accel/dsa/accel_dsa_rpc.o 00:03:15.552 CC module/vfu_device/vfu_virtio_blk.o 00:03:15.552 LIB libspdk_scheduler_dynamic.a 00:03:15.552 CC module/vfu_device/vfu_virtio_scsi.o 00:03:15.552 SO libspdk_scheduler_dynamic.so.3.0 00:03:15.552 LIB libspdk_blob_bdev.a 00:03:15.552 SO libspdk_blob_bdev.so.10.1 00:03:15.552 LIB libspdk_accel_error.a 00:03:15.552 LIB libspdk_accel_iaa.a 00:03:15.552 SYMLINK libspdk_scheduler_dynamic.so 00:03:15.552 SO libspdk_accel_error.so.1.0 00:03:15.810 SO libspdk_accel_iaa.so.2.0 00:03:15.810 LIB libspdk_accel_ioat.a 00:03:15.810 SYMLINK libspdk_blob_bdev.so 00:03:15.810 LIB libspdk_accel_dsa.a 00:03:15.810 SO libspdk_accel_ioat.so.5.0 00:03:15.810 SYMLINK libspdk_accel_error.so 00:03:15.810 SO libspdk_accel_dsa.so.4.0 00:03:15.810 SYMLINK libspdk_accel_iaa.so 00:03:15.810 CC module/scheduler/gscheduler/gscheduler.o 00:03:15.810 SYMLINK libspdk_accel_dsa.so 00:03:15.810 SYMLINK libspdk_accel_ioat.so 00:03:15.810 CC module/vfu_device/vfu_virtio_rpc.o 00:03:15.810 CC module/sock/uring/uring.o 00:03:15.810 CC module/bdev/delay/vbdev_delay.o 00:03:15.810 CC module/bdev/error/vbdev_error.o 00:03:16.069 CC module/blobfs/bdev/blobfs_bdev.o 00:03:16.069 LIB libspdk_scheduler_gscheduler.a 00:03:16.069 SO libspdk_scheduler_gscheduler.so.3.0 00:03:16.069 CC module/bdev/gpt/gpt.o 00:03:16.069 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:16.069 LIB libspdk_vfu_device.a 00:03:16.069 CC module/bdev/lvol/vbdev_lvol.o 00:03:16.069 SYMLINK libspdk_scheduler_gscheduler.so 00:03:16.069 SO libspdk_vfu_device.so.2.0 00:03:16.069 LIB libspdk_sock_posix.a 00:03:16.069 SO libspdk_sock_posix.so.5.0 00:03:16.069 SYMLINK libspdk_vfu_device.so 00:03:16.069 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:16.069 CC module/bdev/malloc/bdev_malloc.o 00:03:16.069 LIB libspdk_blobfs_bdev.a 00:03:16.069 CC module/bdev/gpt/vbdev_gpt.o 00:03:16.069 SYMLINK libspdk_sock_posix.so 00:03:16.069 SO libspdk_blobfs_bdev.so.5.0 00:03:16.328 CC module/bdev/error/vbdev_error_rpc.o 00:03:16.328 CC module/bdev/null/bdev_null.o 00:03:16.328 SYMLINK libspdk_blobfs_bdev.so 00:03:16.328 CC module/bdev/nvme/bdev_nvme.o 00:03:16.328 LIB libspdk_bdev_delay.a 00:03:16.328 SO libspdk_bdev_delay.so.5.0 00:03:16.328 CC module/bdev/passthru/vbdev_passthru.o 00:03:16.328 LIB libspdk_bdev_error.a 00:03:16.328 CC module/bdev/raid/bdev_raid.o 00:03:16.328 SO libspdk_bdev_error.so.5.0 00:03:16.328 SYMLINK libspdk_bdev_delay.so 00:03:16.328 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:16.328 LIB libspdk_bdev_gpt.a 00:03:16.587 SYMLINK libspdk_bdev_error.so 00:03:16.587 CC module/bdev/nvme/nvme_rpc.o 00:03:16.587 SO libspdk_bdev_gpt.so.5.0 00:03:16.587 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:16.587 
CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:16.587 CC module/bdev/null/bdev_null_rpc.o 00:03:16.587 LIB libspdk_sock_uring.a 00:03:16.587 SYMLINK libspdk_bdev_gpt.so 00:03:16.587 SO libspdk_sock_uring.so.4.0 00:03:16.587 SYMLINK libspdk_sock_uring.so 00:03:16.587 CC module/bdev/split/vbdev_split.o 00:03:16.587 LIB libspdk_bdev_malloc.a 00:03:16.587 LIB libspdk_bdev_null.a 00:03:16.587 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:16.587 SO libspdk_bdev_null.so.5.0 00:03:16.846 SO libspdk_bdev_malloc.so.5.0 00:03:16.846 CC module/bdev/nvme/bdev_mdns_client.o 00:03:16.846 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:16.846 SYMLINK libspdk_bdev_null.so 00:03:16.846 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:16.846 SYMLINK libspdk_bdev_malloc.so 00:03:16.846 LIB libspdk_bdev_lvol.a 00:03:16.846 SO libspdk_bdev_lvol.so.5.0 00:03:16.846 LIB libspdk_bdev_passthru.a 00:03:16.846 SO libspdk_bdev_passthru.so.5.0 00:03:16.846 CC module/bdev/split/vbdev_split_rpc.o 00:03:16.846 SYMLINK libspdk_bdev_lvol.so 00:03:16.846 CC module/bdev/nvme/vbdev_opal.o 00:03:16.846 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:17.105 CC module/bdev/uring/bdev_uring.o 00:03:17.105 SYMLINK libspdk_bdev_passthru.so 00:03:17.105 CC module/bdev/aio/bdev_aio.o 00:03:17.105 LIB libspdk_bdev_split.a 00:03:17.105 CC module/bdev/ftl/bdev_ftl.o 00:03:17.105 LIB libspdk_bdev_zone_block.a 00:03:17.105 SO libspdk_bdev_split.so.5.0 00:03:17.105 SO libspdk_bdev_zone_block.so.5.0 00:03:17.105 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:17.105 CC module/bdev/iscsi/bdev_iscsi.o 00:03:17.105 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:17.105 SYMLINK libspdk_bdev_split.so 00:03:17.105 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:17.105 SYMLINK libspdk_bdev_zone_block.so 00:03:17.364 CC module/bdev/raid/bdev_raid_rpc.o 00:03:17.364 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:17.364 CC module/bdev/raid/bdev_raid_sb.o 00:03:17.364 CC module/bdev/uring/bdev_uring_rpc.o 00:03:17.364 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:17.364 CC module/bdev/aio/bdev_aio_rpc.o 00:03:17.364 CC module/bdev/raid/raid0.o 00:03:17.364 LIB libspdk_bdev_ftl.a 00:03:17.364 SO libspdk_bdev_ftl.so.5.0 00:03:17.623 CC module/bdev/raid/raid1.o 00:03:17.623 LIB libspdk_bdev_uring.a 00:03:17.623 SYMLINK libspdk_bdev_ftl.so 00:03:17.623 CC module/bdev/raid/concat.o 00:03:17.623 LIB libspdk_bdev_iscsi.a 00:03:17.623 LIB libspdk_bdev_aio.a 00:03:17.623 SO libspdk_bdev_uring.so.5.0 00:03:17.623 SO libspdk_bdev_iscsi.so.5.0 00:03:17.623 SO libspdk_bdev_aio.so.5.0 00:03:17.623 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:17.623 SYMLINK libspdk_bdev_iscsi.so 00:03:17.623 SYMLINK libspdk_bdev_uring.so 00:03:17.623 SYMLINK libspdk_bdev_aio.so 00:03:17.881 LIB libspdk_bdev_raid.a 00:03:17.881 SO libspdk_bdev_raid.so.5.0 00:03:17.881 LIB libspdk_bdev_virtio.a 00:03:17.881 SO libspdk_bdev_virtio.so.5.0 00:03:17.882 SYMLINK libspdk_bdev_raid.so 00:03:17.882 SYMLINK libspdk_bdev_virtio.so 00:03:18.448 LIB libspdk_bdev_nvme.a 00:03:18.448 SO libspdk_bdev_nvme.so.6.0 00:03:18.707 SYMLINK libspdk_bdev_nvme.so 00:03:18.966 CC module/event/subsystems/vmd/vmd.o 00:03:18.966 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:18.966 CC module/event/subsystems/scheduler/scheduler.o 00:03:18.966 CC module/event/subsystems/iobuf/iobuf.o 00:03:18.966 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:18.966 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:18.966 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:18.966 CC module/event/subsystems/sock/sock.o 00:03:18.966 
LIB libspdk_event_sock.a 00:03:19.225 LIB libspdk_event_iobuf.a 00:03:19.225 LIB libspdk_event_vfu_tgt.a 00:03:19.225 LIB libspdk_event_vhost_blk.a 00:03:19.225 SO libspdk_event_sock.so.4.0 00:03:19.225 LIB libspdk_event_scheduler.a 00:03:19.225 LIB libspdk_event_vmd.a 00:03:19.225 SO libspdk_event_iobuf.so.2.0 00:03:19.225 SO libspdk_event_vhost_blk.so.2.0 00:03:19.225 SO libspdk_event_scheduler.so.3.0 00:03:19.225 SO libspdk_event_vfu_tgt.so.2.0 00:03:19.225 SO libspdk_event_vmd.so.5.0 00:03:19.225 SYMLINK libspdk_event_sock.so 00:03:19.225 SYMLINK libspdk_event_vhost_blk.so 00:03:19.225 SYMLINK libspdk_event_iobuf.so 00:03:19.225 SYMLINK libspdk_event_scheduler.so 00:03:19.225 SYMLINK libspdk_event_vfu_tgt.so 00:03:19.225 SYMLINK libspdk_event_vmd.so 00:03:19.484 CC module/event/subsystems/accel/accel.o 00:03:19.484 LIB libspdk_event_accel.a 00:03:19.484 SO libspdk_event_accel.so.5.0 00:03:19.484 SYMLINK libspdk_event_accel.so 00:03:19.743 CC module/event/subsystems/bdev/bdev.o 00:03:20.002 LIB libspdk_event_bdev.a 00:03:20.002 SO libspdk_event_bdev.so.5.0 00:03:20.002 SYMLINK libspdk_event_bdev.so 00:03:20.261 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:20.261 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:20.261 CC module/event/subsystems/ublk/ublk.o 00:03:20.261 CC module/event/subsystems/nbd/nbd.o 00:03:20.261 CC module/event/subsystems/scsi/scsi.o 00:03:20.261 LIB libspdk_event_nbd.a 00:03:20.261 LIB libspdk_event_ublk.a 00:03:20.261 LIB libspdk_event_scsi.a 00:03:20.261 SO libspdk_event_nbd.so.5.0 00:03:20.261 SO libspdk_event_ublk.so.2.0 00:03:20.261 SO libspdk_event_scsi.so.5.0 00:03:20.520 SYMLINK libspdk_event_ublk.so 00:03:20.520 SYMLINK libspdk_event_nbd.so 00:03:20.520 LIB libspdk_event_nvmf.a 00:03:20.520 SYMLINK libspdk_event_scsi.so 00:03:20.520 SO libspdk_event_nvmf.so.5.0 00:03:20.520 SYMLINK libspdk_event_nvmf.so 00:03:20.520 CC module/event/subsystems/iscsi/iscsi.o 00:03:20.520 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:20.779 LIB libspdk_event_vhost_scsi.a 00:03:20.779 SO libspdk_event_vhost_scsi.so.2.0 00:03:20.779 LIB libspdk_event_iscsi.a 00:03:20.779 SO libspdk_event_iscsi.so.5.0 00:03:20.779 SYMLINK libspdk_event_vhost_scsi.so 00:03:20.779 SYMLINK libspdk_event_iscsi.so 00:03:21.038 SO libspdk.so.5.0 00:03:21.038 SYMLINK libspdk.so 00:03:21.038 CXX app/trace/trace.o 00:03:21.038 CC app/trace_record/trace_record.o 00:03:21.298 CC app/nvmf_tgt/nvmf_main.o 00:03:21.298 CC examples/accel/perf/accel_perf.o 00:03:21.298 CC examples/bdev/hello_world/hello_bdev.o 00:03:21.298 CC examples/blob/hello_world/hello_blob.o 00:03:21.298 CC test/bdev/bdevio/bdevio.o 00:03:21.298 CC test/blobfs/mkfs/mkfs.o 00:03:21.298 CC test/app/bdev_svc/bdev_svc.o 00:03:21.298 CC test/accel/dif/dif.o 00:03:21.558 LINK nvmf_tgt 00:03:21.558 LINK spdk_trace_record 00:03:21.558 LINK mkfs 00:03:21.558 LINK bdev_svc 00:03:21.558 LINK hello_blob 00:03:21.558 LINK hello_bdev 00:03:21.558 LINK spdk_trace 00:03:21.558 LINK bdevio 00:03:21.817 LINK dif 00:03:21.817 LINK accel_perf 00:03:21.817 CC test/app/histogram_perf/histogram_perf.o 00:03:21.817 CC test/app/jsoncat/jsoncat.o 00:03:21.817 TEST_HEADER include/spdk/accel.h 00:03:21.817 TEST_HEADER include/spdk/accel_module.h 00:03:21.817 TEST_HEADER include/spdk/assert.h 00:03:21.817 TEST_HEADER include/spdk/barrier.h 00:03:21.817 TEST_HEADER include/spdk/base64.h 00:03:21.817 TEST_HEADER include/spdk/bdev.h 00:03:21.817 TEST_HEADER include/spdk/bdev_module.h 00:03:21.817 TEST_HEADER include/spdk/bdev_zone.h 00:03:21.817 
TEST_HEADER include/spdk/bit_array.h 00:03:21.817 TEST_HEADER include/spdk/bit_pool.h 00:03:21.817 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:21.817 TEST_HEADER include/spdk/blob_bdev.h 00:03:21.817 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:21.817 TEST_HEADER include/spdk/blobfs.h 00:03:21.817 TEST_HEADER include/spdk/blob.h 00:03:21.817 TEST_HEADER include/spdk/conf.h 00:03:21.817 TEST_HEADER include/spdk/config.h 00:03:21.817 TEST_HEADER include/spdk/cpuset.h 00:03:21.817 TEST_HEADER include/spdk/crc16.h 00:03:21.817 TEST_HEADER include/spdk/crc32.h 00:03:21.817 TEST_HEADER include/spdk/crc64.h 00:03:21.817 TEST_HEADER include/spdk/dif.h 00:03:21.817 TEST_HEADER include/spdk/dma.h 00:03:21.817 TEST_HEADER include/spdk/endian.h 00:03:21.817 TEST_HEADER include/spdk/env_dpdk.h 00:03:21.817 CC examples/blob/cli/blobcli.o 00:03:21.817 TEST_HEADER include/spdk/env.h 00:03:21.817 TEST_HEADER include/spdk/event.h 00:03:21.817 TEST_HEADER include/spdk/fd_group.h 00:03:21.817 TEST_HEADER include/spdk/fd.h 00:03:21.817 TEST_HEADER include/spdk/file.h 00:03:21.817 TEST_HEADER include/spdk/ftl.h 00:03:21.817 TEST_HEADER include/spdk/gpt_spec.h 00:03:21.817 TEST_HEADER include/spdk/hexlify.h 00:03:21.817 TEST_HEADER include/spdk/histogram_data.h 00:03:21.817 TEST_HEADER include/spdk/idxd.h 00:03:21.817 TEST_HEADER include/spdk/idxd_spec.h 00:03:21.817 TEST_HEADER include/spdk/init.h 00:03:21.817 TEST_HEADER include/spdk/ioat.h 00:03:21.817 TEST_HEADER include/spdk/ioat_spec.h 00:03:21.817 TEST_HEADER include/spdk/iscsi_spec.h 00:03:21.817 TEST_HEADER include/spdk/json.h 00:03:21.817 TEST_HEADER include/spdk/jsonrpc.h 00:03:21.817 TEST_HEADER include/spdk/likely.h 00:03:21.817 TEST_HEADER include/spdk/log.h 00:03:21.817 TEST_HEADER include/spdk/lvol.h 00:03:21.817 TEST_HEADER include/spdk/memory.h 00:03:21.817 TEST_HEADER include/spdk/mmio.h 00:03:21.817 TEST_HEADER include/spdk/nbd.h 00:03:21.817 TEST_HEADER include/spdk/notify.h 00:03:21.817 TEST_HEADER include/spdk/nvme.h 00:03:21.817 TEST_HEADER include/spdk/nvme_intel.h 00:03:21.817 CC app/iscsi_tgt/iscsi_tgt.o 00:03:21.817 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:21.817 LINK histogram_perf 00:03:21.817 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:21.817 TEST_HEADER include/spdk/nvme_spec.h 00:03:21.817 CC examples/bdev/bdevperf/bdevperf.o 00:03:21.817 TEST_HEADER include/spdk/nvme_zns.h 00:03:21.817 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:21.817 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:21.817 TEST_HEADER include/spdk/nvmf.h 00:03:21.817 TEST_HEADER include/spdk/nvmf_spec.h 00:03:21.817 TEST_HEADER include/spdk/nvmf_transport.h 00:03:21.817 TEST_HEADER include/spdk/opal.h 00:03:21.817 TEST_HEADER include/spdk/opal_spec.h 00:03:21.817 LINK jsoncat 00:03:21.817 TEST_HEADER include/spdk/pci_ids.h 00:03:21.817 TEST_HEADER include/spdk/pipe.h 00:03:21.817 TEST_HEADER include/spdk/queue.h 00:03:21.817 TEST_HEADER include/spdk/reduce.h 00:03:21.817 TEST_HEADER include/spdk/rpc.h 00:03:21.817 TEST_HEADER include/spdk/scheduler.h 00:03:21.817 TEST_HEADER include/spdk/scsi.h 00:03:21.817 TEST_HEADER include/spdk/scsi_spec.h 00:03:21.817 TEST_HEADER include/spdk/sock.h 00:03:21.817 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:21.817 TEST_HEADER include/spdk/stdinc.h 00:03:21.817 TEST_HEADER include/spdk/string.h 00:03:21.817 TEST_HEADER include/spdk/thread.h 00:03:21.817 TEST_HEADER include/spdk/trace.h 00:03:21.817 TEST_HEADER include/spdk/trace_parser.h 00:03:21.817 TEST_HEADER include/spdk/tree.h 00:03:21.817 TEST_HEADER 
include/spdk/ublk.h 00:03:21.817 TEST_HEADER include/spdk/util.h 00:03:22.076 TEST_HEADER include/spdk/uuid.h 00:03:22.076 TEST_HEADER include/spdk/version.h 00:03:22.076 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:22.076 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:22.076 TEST_HEADER include/spdk/vhost.h 00:03:22.076 TEST_HEADER include/spdk/vmd.h 00:03:22.076 CC test/app/stub/stub.o 00:03:22.076 TEST_HEADER include/spdk/xor.h 00:03:22.076 TEST_HEADER include/spdk/zipf.h 00:03:22.076 CXX test/cpp_headers/accel.o 00:03:22.076 CC test/dma/test_dma/test_dma.o 00:03:22.076 LINK iscsi_tgt 00:03:22.076 CXX test/cpp_headers/accel_module.o 00:03:22.076 LINK stub 00:03:22.076 CC test/event/event_perf/event_perf.o 00:03:22.076 LINK nvme_fuzz 00:03:22.076 CC test/env/mem_callbacks/mem_callbacks.o 00:03:22.334 CXX test/cpp_headers/assert.o 00:03:22.334 LINK blobcli 00:03:22.334 LINK event_perf 00:03:22.334 CC app/spdk_lspci/spdk_lspci.o 00:03:22.334 CC app/spdk_nvme_perf/perf.o 00:03:22.334 CC app/spdk_tgt/spdk_tgt.o 00:03:22.334 LINK test_dma 00:03:22.334 CXX test/cpp_headers/barrier.o 00:03:22.593 LINK spdk_lspci 00:03:22.593 CC test/event/reactor/reactor.o 00:03:22.593 CC test/env/vtophys/vtophys.o 00:03:22.593 CXX test/cpp_headers/base64.o 00:03:22.593 LINK spdk_tgt 00:03:22.593 CXX test/cpp_headers/bdev.o 00:03:22.593 LINK bdevperf 00:03:22.593 CXX test/cpp_headers/bdev_module.o 00:03:22.593 LINK reactor 00:03:22.851 LINK vtophys 00:03:22.851 LINK mem_callbacks 00:03:22.851 CC app/spdk_nvme_identify/identify.o 00:03:22.852 CXX test/cpp_headers/bdev_zone.o 00:03:22.852 CC test/event/reactor_perf/reactor_perf.o 00:03:22.852 CC test/nvme/aer/aer.o 00:03:22.852 CC test/rpc_client/rpc_client_test.o 00:03:22.852 CC examples/ioat/perf/perf.o 00:03:23.110 CC test/lvol/esnap/esnap.o 00:03:23.110 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:23.110 LINK reactor_perf 00:03:23.110 CXX test/cpp_headers/bit_array.o 00:03:23.110 LINK rpc_client_test 00:03:23.110 LINK env_dpdk_post_init 00:03:23.110 LINK ioat_perf 00:03:23.110 CXX test/cpp_headers/bit_pool.o 00:03:23.110 CC test/event/app_repeat/app_repeat.o 00:03:23.368 LINK aer 00:03:23.368 LINK spdk_nvme_perf 00:03:23.368 CXX test/cpp_headers/blob_bdev.o 00:03:23.368 CC test/thread/poller_perf/poller_perf.o 00:03:23.368 LINK app_repeat 00:03:23.368 CC examples/ioat/verify/verify.o 00:03:23.368 CC test/env/memory/memory_ut.o 00:03:23.368 CC test/nvme/reset/reset.o 00:03:23.627 CC app/spdk_nvme_discover/discovery_aer.o 00:03:23.627 LINK poller_perf 00:03:23.627 LINK iscsi_fuzz 00:03:23.627 CXX test/cpp_headers/blobfs_bdev.o 00:03:23.627 LINK spdk_nvme_identify 00:03:23.627 LINK verify 00:03:23.627 CC test/event/scheduler/scheduler.o 00:03:23.627 LINK spdk_nvme_discover 00:03:23.885 LINK reset 00:03:23.885 CXX test/cpp_headers/blobfs.o 00:03:23.885 CC examples/nvme/hello_world/hello_world.o 00:03:23.885 CC examples/nvme/reconnect/reconnect.o 00:03:23.885 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:23.885 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:23.885 CC app/spdk_top/spdk_top.o 00:03:23.885 LINK scheduler 00:03:23.885 CC test/nvme/sgl/sgl.o 00:03:23.885 CXX test/cpp_headers/blob.o 00:03:23.885 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:24.144 LINK hello_world 00:03:24.144 CXX test/cpp_headers/conf.o 00:03:24.144 LINK reconnect 00:03:24.144 CC test/nvme/e2edp/nvme_dp.o 00:03:24.144 LINK sgl 00:03:24.402 CC examples/nvme/arbitration/arbitration.o 00:03:24.402 CXX test/cpp_headers/config.o 00:03:24.402 CXX 
test/cpp_headers/cpuset.o 00:03:24.402 LINK nvme_manage 00:03:24.402 LINK memory_ut 00:03:24.402 CC examples/nvme/hotplug/hotplug.o 00:03:24.402 LINK vhost_fuzz 00:03:24.402 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:24.402 LINK nvme_dp 00:03:24.402 CXX test/cpp_headers/crc16.o 00:03:24.402 CXX test/cpp_headers/crc32.o 00:03:24.661 CXX test/cpp_headers/crc64.o 00:03:24.661 CC test/env/pci/pci_ut.o 00:03:24.661 LINK arbitration 00:03:24.661 LINK cmb_copy 00:03:24.661 LINK hotplug 00:03:24.661 CC test/nvme/overhead/overhead.o 00:03:24.661 CC test/nvme/err_injection/err_injection.o 00:03:24.661 CC test/nvme/startup/startup.o 00:03:24.661 CXX test/cpp_headers/dif.o 00:03:24.661 CXX test/cpp_headers/dma.o 00:03:24.919 LINK spdk_top 00:03:24.919 CXX test/cpp_headers/endian.o 00:03:24.919 CC examples/nvme/abort/abort.o 00:03:24.919 LINK startup 00:03:24.919 CXX test/cpp_headers/env_dpdk.o 00:03:24.919 LINK err_injection 00:03:24.919 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:24.919 LINK pci_ut 00:03:24.919 LINK overhead 00:03:24.919 CC app/vhost/vhost.o 00:03:25.177 CC app/spdk_dd/spdk_dd.o 00:03:25.177 CXX test/cpp_headers/env.o 00:03:25.177 CC test/nvme/reserve/reserve.o 00:03:25.177 LINK pmr_persistence 00:03:25.177 CC app/fio/nvme/fio_plugin.o 00:03:25.177 LINK vhost 00:03:25.177 CXX test/cpp_headers/event.o 00:03:25.177 LINK abort 00:03:25.435 CC app/fio/bdev/fio_plugin.o 00:03:25.435 CXX test/cpp_headers/fd_group.o 00:03:25.435 LINK reserve 00:03:25.435 CC examples/sock/hello_world/hello_sock.o 00:03:25.435 CXX test/cpp_headers/fd.o 00:03:25.435 LINK spdk_dd 00:03:25.435 CC examples/vmd/lsvmd/lsvmd.o 00:03:25.693 CC examples/util/zipf/zipf.o 00:03:25.693 CC test/nvme/simple_copy/simple_copy.o 00:03:25.693 LINK hello_sock 00:03:25.693 CC examples/nvmf/nvmf/nvmf.o 00:03:25.693 CXX test/cpp_headers/file.o 00:03:25.693 CXX test/cpp_headers/ftl.o 00:03:25.693 LINK lsvmd 00:03:25.693 LINK zipf 00:03:25.693 CXX test/cpp_headers/gpt_spec.o 00:03:25.693 LINK spdk_nvme 00:03:25.693 LINK spdk_bdev 00:03:25.951 LINK simple_copy 00:03:25.951 CC test/nvme/connect_stress/connect_stress.o 00:03:25.951 CC examples/vmd/led/led.o 00:03:25.951 CXX test/cpp_headers/hexlify.o 00:03:25.951 LINK nvmf 00:03:25.951 CC test/nvme/boot_partition/boot_partition.o 00:03:25.951 CC examples/idxd/perf/perf.o 00:03:25.951 CC examples/thread/thread/thread_ex.o 00:03:25.951 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:25.951 CC test/nvme/compliance/nvme_compliance.o 00:03:25.951 LINK connect_stress 00:03:25.951 LINK led 00:03:26.210 CXX test/cpp_headers/histogram_data.o 00:03:26.210 LINK boot_partition 00:03:26.210 CC test/nvme/fused_ordering/fused_ordering.o 00:03:26.210 LINK interrupt_tgt 00:03:26.210 CXX test/cpp_headers/idxd.o 00:03:26.210 LINK thread 00:03:26.210 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:26.210 LINK idxd_perf 00:03:26.210 CC test/nvme/fdp/fdp.o 00:03:26.468 CXX test/cpp_headers/idxd_spec.o 00:03:26.468 CC test/nvme/cuse/cuse.o 00:03:26.468 LINK nvme_compliance 00:03:26.468 LINK fused_ordering 00:03:26.468 CXX test/cpp_headers/init.o 00:03:26.468 CXX test/cpp_headers/ioat.o 00:03:26.468 CXX test/cpp_headers/ioat_spec.o 00:03:26.468 LINK doorbell_aers 00:03:26.468 CXX test/cpp_headers/iscsi_spec.o 00:03:26.468 CXX test/cpp_headers/json.o 00:03:26.468 CXX test/cpp_headers/jsonrpc.o 00:03:26.468 CXX test/cpp_headers/likely.o 00:03:26.727 CXX test/cpp_headers/log.o 00:03:26.727 CXX test/cpp_headers/lvol.o 00:03:26.727 CXX test/cpp_headers/memory.o 00:03:26.727 LINK fdp 00:03:26.727 CXX 
test/cpp_headers/mmio.o 00:03:26.727 CXX test/cpp_headers/nbd.o 00:03:26.727 CXX test/cpp_headers/notify.o 00:03:26.727 CXX test/cpp_headers/nvme.o 00:03:26.727 CXX test/cpp_headers/nvme_intel.o 00:03:26.727 CXX test/cpp_headers/nvme_ocssd.o 00:03:26.727 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:26.727 CXX test/cpp_headers/nvme_spec.o 00:03:26.727 CXX test/cpp_headers/nvme_zns.o 00:03:26.727 CXX test/cpp_headers/nvmf_cmd.o 00:03:26.985 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:26.985 CXX test/cpp_headers/nvmf.o 00:03:26.985 CXX test/cpp_headers/nvmf_spec.o 00:03:26.985 CXX test/cpp_headers/nvmf_transport.o 00:03:26.985 CXX test/cpp_headers/opal.o 00:03:26.985 CXX test/cpp_headers/opal_spec.o 00:03:26.985 CXX test/cpp_headers/pci_ids.o 00:03:26.985 CXX test/cpp_headers/pipe.o 00:03:26.985 CXX test/cpp_headers/queue.o 00:03:26.985 CXX test/cpp_headers/reduce.o 00:03:26.985 CXX test/cpp_headers/rpc.o 00:03:26.985 CXX test/cpp_headers/scheduler.o 00:03:27.243 CXX test/cpp_headers/scsi.o 00:03:27.243 CXX test/cpp_headers/scsi_spec.o 00:03:27.243 CXX test/cpp_headers/sock.o 00:03:27.243 CXX test/cpp_headers/stdinc.o 00:03:27.243 CXX test/cpp_headers/string.o 00:03:27.243 CXX test/cpp_headers/thread.o 00:03:27.243 CXX test/cpp_headers/trace.o 00:03:27.243 CXX test/cpp_headers/trace_parser.o 00:03:27.243 CXX test/cpp_headers/tree.o 00:03:27.243 CXX test/cpp_headers/ublk.o 00:03:27.243 CXX test/cpp_headers/util.o 00:03:27.243 CXX test/cpp_headers/uuid.o 00:03:27.243 CXX test/cpp_headers/version.o 00:03:27.243 CXX test/cpp_headers/vfio_user_pci.o 00:03:27.502 CXX test/cpp_headers/vfio_user_spec.o 00:03:27.502 CXX test/cpp_headers/vhost.o 00:03:27.502 CXX test/cpp_headers/vmd.o 00:03:27.502 LINK cuse 00:03:27.502 CXX test/cpp_headers/xor.o 00:03:27.502 CXX test/cpp_headers/zipf.o 00:03:27.502 LINK esnap 00:03:28.070 00:03:28.070 real 0m59.610s 00:03:28.070 user 6m31.122s 00:03:28.070 sys 1m23.232s 00:03:28.070 06:33:41 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:28.070 06:33:41 -- common/autotest_common.sh@10 -- $ set +x 00:03:28.070 ************************************ 00:03:28.070 END TEST make 00:03:28.070 ************************************ 00:03:28.070 06:33:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:28.070 06:33:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:28.070 06:33:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:28.070 06:33:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:28.070 06:33:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:28.070 06:33:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:28.070 06:33:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:28.070 06:33:41 -- scripts/common.sh@335 -- # IFS=.-: 00:03:28.070 06:33:41 -- scripts/common.sh@335 -- # read -ra ver1 00:03:28.070 06:33:41 -- scripts/common.sh@336 -- # IFS=.-: 00:03:28.070 06:33:41 -- scripts/common.sh@336 -- # read -ra ver2 00:03:28.070 06:33:41 -- scripts/common.sh@337 -- # local 'op=<' 00:03:28.070 06:33:41 -- scripts/common.sh@339 -- # ver1_l=2 00:03:28.070 06:33:41 -- scripts/common.sh@340 -- # ver2_l=1 00:03:28.070 06:33:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:28.070 06:33:41 -- scripts/common.sh@343 -- # case "$op" in 00:03:28.070 06:33:41 -- scripts/common.sh@344 -- # : 1 00:03:28.070 06:33:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:28.070 06:33:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:28.070 06:33:41 -- scripts/common.sh@364 -- # decimal 1 00:03:28.070 06:33:41 -- scripts/common.sh@352 -- # local d=1 00:03:28.070 06:33:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:28.070 06:33:41 -- scripts/common.sh@354 -- # echo 1 00:03:28.070 06:33:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:28.070 06:33:41 -- scripts/common.sh@365 -- # decimal 2 00:03:28.070 06:33:41 -- scripts/common.sh@352 -- # local d=2 00:03:28.070 06:33:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:28.070 06:33:41 -- scripts/common.sh@354 -- # echo 2 00:03:28.070 06:33:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:28.070 06:33:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:28.070 06:33:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:28.071 06:33:41 -- scripts/common.sh@367 -- # return 0 00:03:28.071 06:33:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:28.071 06:33:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:28.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.071 --rc genhtml_branch_coverage=1 00:03:28.071 --rc genhtml_function_coverage=1 00:03:28.071 --rc genhtml_legend=1 00:03:28.071 --rc geninfo_all_blocks=1 00:03:28.071 --rc geninfo_unexecuted_blocks=1 00:03:28.071 00:03:28.071 ' 00:03:28.071 06:33:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:28.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.071 --rc genhtml_branch_coverage=1 00:03:28.071 --rc genhtml_function_coverage=1 00:03:28.071 --rc genhtml_legend=1 00:03:28.071 --rc geninfo_all_blocks=1 00:03:28.071 --rc geninfo_unexecuted_blocks=1 00:03:28.071 00:03:28.071 ' 00:03:28.071 06:33:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:28.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.071 --rc genhtml_branch_coverage=1 00:03:28.071 --rc genhtml_function_coverage=1 00:03:28.071 --rc genhtml_legend=1 00:03:28.071 --rc geninfo_all_blocks=1 00:03:28.071 --rc geninfo_unexecuted_blocks=1 00:03:28.071 00:03:28.071 ' 00:03:28.071 06:33:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:28.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:28.071 --rc genhtml_branch_coverage=1 00:03:28.071 --rc genhtml_function_coverage=1 00:03:28.071 --rc genhtml_legend=1 00:03:28.071 --rc geninfo_all_blocks=1 00:03:28.071 --rc geninfo_unexecuted_blocks=1 00:03:28.071 00:03:28.071 ' 00:03:28.071 06:33:41 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:28.071 06:33:41 -- nvmf/common.sh@7 -- # uname -s 00:03:28.071 06:33:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:28.071 06:33:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:28.071 06:33:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:28.071 06:33:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:28.071 06:33:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:28.071 06:33:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:28.071 06:33:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:28.071 06:33:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:28.071 06:33:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:28.071 06:33:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:28.071 06:33:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:03:28.071 
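The lcov probe traced above walks a dotted-version comparison (lt 1.15 2 via cmp_versions/decimal in scripts/common.sh) to decide which coverage options apply. Below is a minimal standalone sketch of that comparison under the same split-on-".-:" convention the trace shows; the helper name ver_lt is illustrative, not the exact scripts/common.sh definition.

# Sketch: compare two dotted versions the way the trace above does
# (split fields on ".-:" and compare them numerically left to right).
ver_lt() {                                     # returns 0 (true) when $1 is older than $2
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}        # missing fields count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                                   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "lcov 1.15 predates lcov 2, use the 1.x option set"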
06:33:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:03:28.071 06:33:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:28.071 06:33:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:28.071 06:33:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:28.071 06:33:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:28.071 06:33:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:28.071 06:33:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:28.071 06:33:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:28.071 06:33:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.071 06:33:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.071 06:33:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.071 06:33:41 -- paths/export.sh@5 -- # export PATH 00:03:28.071 06:33:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:28.071 06:33:41 -- nvmf/common.sh@46 -- # : 0 00:03:28.071 06:33:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:28.071 06:33:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:28.071 06:33:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:28.071 06:33:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:28.071 06:33:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:28.071 06:33:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:28.071 06:33:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:28.071 06:33:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:28.071 06:33:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:28.071 06:33:41 -- spdk/autotest.sh@32 -- # uname -s 00:03:28.071 06:33:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:28.071 06:33:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:28.071 06:33:41 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.071 06:33:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:28.071 06:33:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:28.071 06:33:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:28.071 06:33:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:28.071 06:33:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:28.071 06:33:42 -- spdk/autotest.sh@47 -- # 
/usr/sbin/udevadm monitor --property 00:03:28.071 06:33:42 -- spdk/autotest.sh@48 -- # udevadm_pid=48018 00:03:28.071 06:33:42 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:28.330 06:33:42 -- spdk/autotest.sh@54 -- # echo 48053 00:03:28.330 06:33:42 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:28.330 06:33:42 -- spdk/autotest.sh@56 -- # echo 48054 00:03:28.330 06:33:42 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:28.330 06:33:42 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:28.330 06:33:42 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:28.330 06:33:42 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:28.330 06:33:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:28.330 06:33:42 -- common/autotest_common.sh@10 -- # set +x 00:03:28.330 06:33:42 -- spdk/autotest.sh@70 -- # create_test_list 00:03:28.330 06:33:42 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:28.330 06:33:42 -- common/autotest_common.sh@10 -- # set +x 00:03:28.330 06:33:42 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:28.330 06:33:42 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:28.330 06:33:42 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:28.330 06:33:42 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:28.330 06:33:42 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:28.330 06:33:42 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:28.330 06:33:42 -- common/autotest_common.sh@1450 -- # uname 00:03:28.330 06:33:42 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:28.330 06:33:42 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:28.330 06:33:42 -- common/autotest_common.sh@1470 -- # uname 00:03:28.330 06:33:42 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:28.330 06:33:42 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:28.330 06:33:42 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:28.330 lcov: LCOV version 1.15 00:03:28.330 06:33:42 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:36.479 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:36.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:36.479 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:36.479 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:36.479 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:36.479 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:54.565 06:34:08 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:54.565 06:34:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:54.565 06:34:08 -- common/autotest_common.sh@10 -- # set +x 00:03:54.565 06:34:08 -- spdk/autotest.sh@89 -- # rm -f 00:03:54.565 06:34:08 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.394 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:55.394 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:55.394 06:34:09 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:55.394 06:34:09 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:55.394 06:34:09 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:55.394 06:34:09 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:55.394 06:34:09 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:55.394 06:34:09 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:55.394 06:34:09 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:55.394 06:34:09 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.394 06:34:09 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:55.394 06:34:09 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:55.394 06:34:09 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:55.394 06:34:09 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:55.394 06:34:09 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:55.394 06:34:09 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:55.394 06:34:09 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:55.394 06:34:09 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:55.394 06:34:09 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:55.394 06:34:09 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:55.394 06:34:09 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:55.394 06:34:09 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:55.394 06:34:09 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:55.394 06:34:09 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:55.394 06:34:09 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:55.394 06:34:09 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:55.394 06:34:09 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:55.394 06:34:09 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:03:55.394 06:34:09 -- spdk/autotest.sh@108 -- # grep -v p 00:03:55.394 06:34:09 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:55.394 06:34:09 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:55.394 06:34:09 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:55.394 06:34:09 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:55.394 06:34:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:55.394 No valid GPT data, bailing 00:03:55.394 06:34:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
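Before touching any namespace, the pre-cleanup pass above runs get_zoned_devs, checking each /sys/block/nvme*/queue/zoned attribute so zoned namespaces are not wiped like regular ones. A small sketch of that probe, assuming the same /sys layout the trace shows; the zoned_devs map and variable names are illustrative.

# Sketch: collect block devices whose zoned model is anything other than "none",
# mirroring the /sys/block/<dev>/queue/zoned checks in the trace above.
shopt -s nullglob
declare -A zoned_devs=()
for sysdir in /sys/block/nvme*; do
    dev=${sysdir##*/}
    zoned_attr=$sysdir/queue/zoned
    [[ -e $zoned_attr ]] || continue            # attribute absent -> treat as non-zoned
    if [[ $(<"$zoned_attr") != none ]]; then
        zoned_devs[$dev]=1                      # remember zoned namespaces for later stages
    fi
done
(( ${#zoned_devs[@]} )) && echo "zoned: ${!zoned_devs[*]}" || echo "no zoned devices found"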
00:03:55.394 06:34:09 -- scripts/common.sh@393 -- # pt= 00:03:55.394 06:34:09 -- scripts/common.sh@394 -- # return 1 00:03:55.394 06:34:09 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:55.394 1+0 records in 00:03:55.394 1+0 records out 00:03:55.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00376889 s, 278 MB/s 00:03:55.394 06:34:09 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:55.394 06:34:09 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:55.394 06:34:09 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:55.394 06:34:09 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:55.394 06:34:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:55.394 No valid GPT data, bailing 00:03:55.394 06:34:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:55.394 06:34:09 -- scripts/common.sh@393 -- # pt= 00:03:55.394 06:34:09 -- scripts/common.sh@394 -- # return 1 00:03:55.394 06:34:09 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:55.394 1+0 records in 00:03:55.394 1+0 records out 00:03:55.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403015 s, 260 MB/s 00:03:55.394 06:34:09 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:55.394 06:34:09 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:55.394 06:34:09 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:03:55.394 06:34:09 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:03:55.394 06:34:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:55.654 No valid GPT data, bailing 00:03:55.654 06:34:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:55.654 06:34:09 -- scripts/common.sh@393 -- # pt= 00:03:55.654 06:34:09 -- scripts/common.sh@394 -- # return 1 00:03:55.654 06:34:09 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:55.654 1+0 records in 00:03:55.654 1+0 records out 00:03:55.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418352 s, 251 MB/s 00:03:55.654 06:34:09 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:55.654 06:34:09 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:55.654 06:34:09 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:03:55.654 06:34:09 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:03:55.654 06:34:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:55.654 No valid GPT data, bailing 00:03:55.654 06:34:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:55.654 06:34:09 -- scripts/common.sh@393 -- # pt= 00:03:55.654 06:34:09 -- scripts/common.sh@394 -- # return 1 00:03:55.654 06:34:09 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:55.654 1+0 records in 00:03:55.654 1+0 records out 00:03:55.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403793 s, 260 MB/s 00:03:55.654 06:34:09 -- spdk/autotest.sh@116 -- # sync 00:03:55.913 06:34:09 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:55.913 06:34:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:55.913 06:34:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:57.818 06:34:11 -- spdk/autotest.sh@122 -- # uname -s 00:03:57.818 06:34:11 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
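Each of the four namespaces above fails the partition-table probe ("No valid GPT data, bailing", empty PTTYPE from blkid) and is then considered free, so its first MiB is zeroed before the tests run. A rough sketch of that loop, following the same ls | grep -v p selection and dd invocation the records show; it is a simplified stand-in for the autotest flow, not the script itself.

# Sketch: wipe the first MiB of every NVMe namespace that carries no partition
# table (grep -v p skips partition nodes like nvme0n1p1). Requires root.
for dev in $(ls /dev/nvme*n* 2>/dev/null | grep -v p || true); do
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        # No GPT/MBR signature -> assume the namespace is free for testing
        # and clear any stale metadata left over from a previous run.
        dd if=/dev/zero of="$dev" bs=1M count=1
    else
        echo "skipping $dev: partition table type '$pt' present"
    fi
done
sync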
00:03:57.818 06:34:11 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:57.818 06:34:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.818 06:34:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.818 06:34:11 -- common/autotest_common.sh@10 -- # set +x 00:03:57.818 ************************************ 00:03:57.818 START TEST setup.sh 00:03:57.818 ************************************ 00:03:57.818 06:34:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:57.818 * Looking for test storage... 00:03:57.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:57.818 06:34:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:57.818 06:34:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:57.818 06:34:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:58.078 06:34:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:58.078 06:34:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:58.078 06:34:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:58.078 06:34:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:58.078 06:34:11 -- scripts/common.sh@335 -- # IFS=.-: 00:03:58.078 06:34:11 -- scripts/common.sh@335 -- # read -ra ver1 00:03:58.078 06:34:11 -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.078 06:34:11 -- scripts/common.sh@336 -- # read -ra ver2 00:03:58.078 06:34:11 -- scripts/common.sh@337 -- # local 'op=<' 00:03:58.078 06:34:11 -- scripts/common.sh@339 -- # ver1_l=2 00:03:58.078 06:34:11 -- scripts/common.sh@340 -- # ver2_l=1 00:03:58.078 06:34:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:58.078 06:34:11 -- scripts/common.sh@343 -- # case "$op" in 00:03:58.078 06:34:11 -- scripts/common.sh@344 -- # : 1 00:03:58.078 06:34:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:58.078 06:34:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:58.078 06:34:11 -- scripts/common.sh@364 -- # decimal 1 00:03:58.078 06:34:11 -- scripts/common.sh@352 -- # local d=1 00:03:58.078 06:34:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.078 06:34:11 -- scripts/common.sh@354 -- # echo 1 00:03:58.078 06:34:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:58.078 06:34:11 -- scripts/common.sh@365 -- # decimal 2 00:03:58.078 06:34:11 -- scripts/common.sh@352 -- # local d=2 00:03:58.078 06:34:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.078 06:34:11 -- scripts/common.sh@354 -- # echo 2 00:03:58.078 06:34:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:58.078 06:34:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:58.078 06:34:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:58.078 06:34:11 -- scripts/common.sh@367 -- # return 0 00:03:58.078 06:34:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.078 06:34:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.078 --rc genhtml_branch_coverage=1 00:03:58.078 --rc genhtml_function_coverage=1 00:03:58.078 --rc genhtml_legend=1 00:03:58.078 --rc geninfo_all_blocks=1 00:03:58.078 --rc geninfo_unexecuted_blocks=1 00:03:58.078 00:03:58.078 ' 00:03:58.078 06:34:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.078 --rc genhtml_branch_coverage=1 00:03:58.078 --rc genhtml_function_coverage=1 00:03:58.078 --rc genhtml_legend=1 00:03:58.078 --rc geninfo_all_blocks=1 00:03:58.078 --rc geninfo_unexecuted_blocks=1 00:03:58.078 00:03:58.078 ' 00:03:58.078 06:34:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.078 --rc genhtml_branch_coverage=1 00:03:58.078 --rc genhtml_function_coverage=1 00:03:58.078 --rc genhtml_legend=1 00:03:58.078 --rc geninfo_all_blocks=1 00:03:58.078 --rc geninfo_unexecuted_blocks=1 00:03:58.078 00:03:58.078 ' 00:03:58.078 06:34:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.078 --rc genhtml_branch_coverage=1 00:03:58.078 --rc genhtml_function_coverage=1 00:03:58.078 --rc genhtml_legend=1 00:03:58.078 --rc geninfo_all_blocks=1 00:03:58.078 --rc geninfo_unexecuted_blocks=1 00:03:58.078 00:03:58.078 ' 00:03:58.078 06:34:11 -- setup/test-setup.sh@10 -- # uname -s 00:03:58.078 06:34:11 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:58.078 06:34:11 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:58.078 06:34:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.078 06:34:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.078 06:34:11 -- common/autotest_common.sh@10 -- # set +x 00:03:58.078 ************************************ 00:03:58.078 START TEST acl 00:03:58.078 ************************************ 00:03:58.078 06:34:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:58.078 * Looking for test storage... 
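Every stage in this log (setup.sh, acl, denied, allowed, hugepages) is launched through run_test, which prints the asterisk START/END banners and the real/user/sys timing lines seen above and below. The sketch here only imitates that banner-plus-timing pattern; run_stage is an illustrative name and not the actual autotest_common.sh implementation.

# Sketch: banner and timing wrapper in the spirit of the run_test calls above.
run_stage() {
    local name=$1; shift
    local rc=0
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@" || rc=$?            # bash's time keyword emits the real/user/sys lines
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_stage acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh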
00:03:58.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:58.078 06:34:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:58.078 06:34:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:58.078 06:34:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:58.078 06:34:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:58.078 06:34:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:58.078 06:34:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:58.078 06:34:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:58.078 06:34:12 -- scripts/common.sh@335 -- # IFS=.-: 00:03:58.078 06:34:12 -- scripts/common.sh@335 -- # read -ra ver1 00:03:58.078 06:34:12 -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.078 06:34:12 -- scripts/common.sh@336 -- # read -ra ver2 00:03:58.078 06:34:12 -- scripts/common.sh@337 -- # local 'op=<' 00:03:58.078 06:34:12 -- scripts/common.sh@339 -- # ver1_l=2 00:03:58.078 06:34:12 -- scripts/common.sh@340 -- # ver2_l=1 00:03:58.078 06:34:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:58.078 06:34:12 -- scripts/common.sh@343 -- # case "$op" in 00:03:58.078 06:34:12 -- scripts/common.sh@344 -- # : 1 00:03:58.078 06:34:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:58.078 06:34:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:58.078 06:34:12 -- scripts/common.sh@364 -- # decimal 1 00:03:58.078 06:34:12 -- scripts/common.sh@352 -- # local d=1 00:03:58.078 06:34:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.078 06:34:12 -- scripts/common.sh@354 -- # echo 1 00:03:58.078 06:34:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:58.078 06:34:12 -- scripts/common.sh@365 -- # decimal 2 00:03:58.078 06:34:12 -- scripts/common.sh@352 -- # local d=2 00:03:58.078 06:34:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.078 06:34:12 -- scripts/common.sh@354 -- # echo 2 00:03:58.078 06:34:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:58.078 06:34:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:58.078 06:34:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:58.078 06:34:12 -- scripts/common.sh@367 -- # return 0 00:03:58.078 06:34:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.078 06:34:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.078 --rc genhtml_branch_coverage=1 00:03:58.078 --rc genhtml_function_coverage=1 00:03:58.078 --rc genhtml_legend=1 00:03:58.078 --rc geninfo_all_blocks=1 00:03:58.078 --rc geninfo_unexecuted_blocks=1 00:03:58.078 00:03:58.078 ' 00:03:58.078 06:34:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.078 --rc genhtml_branch_coverage=1 00:03:58.078 --rc genhtml_function_coverage=1 00:03:58.078 --rc genhtml_legend=1 00:03:58.078 --rc geninfo_all_blocks=1 00:03:58.078 --rc geninfo_unexecuted_blocks=1 00:03:58.078 00:03:58.078 ' 00:03:58.078 06:34:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:58.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.078 --rc genhtml_branch_coverage=1 00:03:58.078 --rc genhtml_function_coverage=1 00:03:58.079 --rc genhtml_legend=1 00:03:58.079 --rc geninfo_all_blocks=1 00:03:58.079 --rc geninfo_unexecuted_blocks=1 00:03:58.079 00:03:58.079 ' 00:03:58.079 06:34:12 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:58.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.079 --rc genhtml_branch_coverage=1 00:03:58.079 --rc genhtml_function_coverage=1 00:03:58.079 --rc genhtml_legend=1 00:03:58.079 --rc geninfo_all_blocks=1 00:03:58.079 --rc geninfo_unexecuted_blocks=1 00:03:58.079 00:03:58.079 ' 00:03:58.079 06:34:12 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:58.079 06:34:12 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:58.079 06:34:12 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:58.079 06:34:12 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:58.079 06:34:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:58.079 06:34:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:58.079 06:34:12 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:58.079 06:34:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.079 06:34:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:58.079 06:34:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:58.079 06:34:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:58.079 06:34:12 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:58.079 06:34:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:58.079 06:34:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:58.079 06:34:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:58.079 06:34:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:03:58.079 06:34:12 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:03:58.079 06:34:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:58.079 06:34:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:58.079 06:34:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:58.079 06:34:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:03:58.079 06:34:12 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:03:58.079 06:34:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:58.079 06:34:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:58.079 06:34:12 -- setup/acl.sh@12 -- # devs=() 00:03:58.079 06:34:12 -- setup/acl.sh@12 -- # declare -a devs 00:03:58.079 06:34:12 -- setup/acl.sh@13 -- # drivers=() 00:03:58.079 06:34:12 -- setup/acl.sh@13 -- # declare -A drivers 00:03:58.079 06:34:12 -- setup/acl.sh@51 -- # setup reset 00:03:58.079 06:34:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.079 06:34:12 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.017 06:34:12 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:59.017 06:34:12 -- setup/acl.sh@16 -- # local dev driver 00:03:59.017 06:34:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:59.017 06:34:12 -- setup/acl.sh@15 -- # setup output status 00:03:59.017 06:34:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.017 06:34:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:59.017 Hugepages 00:03:59.017 node hugesize free / total 00:03:59.017 06:34:12 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:59.017 06:34:12 -- setup/acl.sh@19 -- # continue 00:03:59.017 06:34:12 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:03:59.017 00:03:59.017 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:59.017 06:34:12 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:59.017 06:34:12 -- setup/acl.sh@19 -- # continue 00:03:59.017 06:34:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:59.277 06:34:13 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:59.277 06:34:13 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:59.277 06:34:13 -- setup/acl.sh@20 -- # continue 00:03:59.277 06:34:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:59.277 06:34:13 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:59.277 06:34:13 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:59.277 06:34:13 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:59.277 06:34:13 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:59.277 06:34:13 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:59.277 06:34:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:59.277 06:34:13 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:59.277 06:34:13 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:59.277 06:34:13 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:59.277 06:34:13 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:59.277 06:34:13 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:59.277 06:34:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:59.277 06:34:13 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:59.277 06:34:13 -- setup/acl.sh@54 -- # run_test denied denied 00:03:59.277 06:34:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.277 06:34:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.277 06:34:13 -- common/autotest_common.sh@10 -- # set +x 00:03:59.277 ************************************ 00:03:59.277 START TEST denied 00:03:59.277 ************************************ 00:03:59.277 06:34:13 -- common/autotest_common.sh@1114 -- # denied 00:03:59.277 06:34:13 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:59.277 06:34:13 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:59.277 06:34:13 -- setup/acl.sh@38 -- # setup output config 00:03:59.277 06:34:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.277 06:34:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:00.215 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:00.215 06:34:14 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:00.215 06:34:14 -- setup/acl.sh@28 -- # local dev driver 00:04:00.215 06:34:14 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:00.215 06:34:14 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:00.215 06:34:14 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:00.215 06:34:14 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:00.215 06:34:14 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:00.215 06:34:14 -- setup/acl.sh@41 -- # setup reset 00:04:00.215 06:34:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.215 06:34:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.782 ************************************ 00:04:00.782 END TEST denied 00:04:00.782 ************************************ 00:04:00.782 00:04:00.783 real 0m1.480s 00:04:00.783 user 0m0.609s 00:04:00.783 sys 0m0.823s 00:04:00.783 06:34:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:00.783 06:34:14 -- 
common/autotest_common.sh@10 -- # set +x 00:04:00.783 06:34:14 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:00.783 06:34:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.783 06:34:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.783 06:34:14 -- common/autotest_common.sh@10 -- # set +x 00:04:00.783 ************************************ 00:04:00.783 START TEST allowed 00:04:00.783 ************************************ 00:04:00.783 06:34:14 -- common/autotest_common.sh@1114 -- # allowed 00:04:00.783 06:34:14 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:00.783 06:34:14 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:00.783 06:34:14 -- setup/acl.sh@45 -- # setup output config 00:04:00.783 06:34:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.783 06:34:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.718 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.718 06:34:15 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:04:01.718 06:34:15 -- setup/acl.sh@28 -- # local dev driver 00:04:01.718 06:34:15 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:01.718 06:34:15 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:01.718 06:34:15 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:04:01.718 06:34:15 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:01.718 06:34:15 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:01.718 06:34:15 -- setup/acl.sh@48 -- # setup reset 00:04:01.718 06:34:15 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.718 06:34:15 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.656 ************************************ 00:04:02.656 END TEST allowed 00:04:02.656 ************************************ 00:04:02.656 00:04:02.656 real 0m1.549s 00:04:02.656 user 0m0.671s 00:04:02.656 sys 0m0.876s 00:04:02.656 06:34:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:02.656 06:34:16 -- common/autotest_common.sh@10 -- # set +x 00:04:02.656 ************************************ 00:04:02.656 END TEST acl 00:04:02.656 ************************************ 00:04:02.656 00:04:02.656 real 0m4.442s 00:04:02.656 user 0m1.938s 00:04:02.656 sys 0m2.477s 00:04:02.656 06:34:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:02.656 06:34:16 -- common/autotest_common.sh@10 -- # set +x 00:04:02.656 06:34:16 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:02.656 06:34:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.656 06:34:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.656 06:34:16 -- common/autotest_common.sh@10 -- # set +x 00:04:02.656 ************************************ 00:04:02.656 START TEST hugepages 00:04:02.656 ************************************ 00:04:02.656 06:34:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:02.656 * Looking for test storage... 
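The denied and allowed stages that just finished drive scripts/setup.sh with PCI_BLOCKED / PCI_ALLOWED and then confirm which driver each controller ended up bound to, by resolving /sys/bus/pci/devices/<bdf>/driver with readlink -f as the acl.sh trace shows. Below is a small sketch of that verification step; verify_driver is an illustrative helper, and the example BDF/driver pair simply restates the "nvme -> uio_pci_generic" rebind printed above.

# Sketch: check that a PCI controller is bound to the driver we expect,
# following the readlink -f .../driver resolution in the acl.sh trace.
verify_driver() {
    local bdf=$1 expected=$2
    local link=/sys/bus/pci/devices/$bdf/driver
    [[ -e $link ]] || { echo "$bdf: no driver bound"; return 1; }
    local driver
    driver=$(readlink -f "$link")
    driver=${driver##*/}                        # e.g. nvme or uio_pci_generic
    if [[ $driver == "$expected" ]]; then
        echo "$bdf: bound to $driver (ok)"
    else
        echo "$bdf: bound to $driver, expected $expected"
        return 1
    fi
}

# e.g. after PCI_ALLOWED=0000:00:06.0 .../scripts/setup.sh config:
verify_driver 0000:00:06.0 uio_pci_generic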
00:04:02.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:02.656 06:34:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:02.656 06:34:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:02.656 06:34:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:02.656 06:34:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:02.656 06:34:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:02.656 06:34:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:02.656 06:34:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:02.656 06:34:16 -- scripts/common.sh@335 -- # IFS=.-: 00:04:02.656 06:34:16 -- scripts/common.sh@335 -- # read -ra ver1 00:04:02.656 06:34:16 -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.656 06:34:16 -- scripts/common.sh@336 -- # read -ra ver2 00:04:02.656 06:34:16 -- scripts/common.sh@337 -- # local 'op=<' 00:04:02.656 06:34:16 -- scripts/common.sh@339 -- # ver1_l=2 00:04:02.656 06:34:16 -- scripts/common.sh@340 -- # ver2_l=1 00:04:02.656 06:34:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:02.656 06:34:16 -- scripts/common.sh@343 -- # case "$op" in 00:04:02.656 06:34:16 -- scripts/common.sh@344 -- # : 1 00:04:02.656 06:34:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:02.656 06:34:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.656 06:34:16 -- scripts/common.sh@364 -- # decimal 1 00:04:02.656 06:34:16 -- scripts/common.sh@352 -- # local d=1 00:04:02.656 06:34:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.656 06:34:16 -- scripts/common.sh@354 -- # echo 1 00:04:02.656 06:34:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:02.656 06:34:16 -- scripts/common.sh@365 -- # decimal 2 00:04:02.656 06:34:16 -- scripts/common.sh@352 -- # local d=2 00:04:02.656 06:34:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.656 06:34:16 -- scripts/common.sh@354 -- # echo 2 00:04:02.656 06:34:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:02.656 06:34:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:02.656 06:34:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:02.656 06:34:16 -- scripts/common.sh@367 -- # return 0 00:04:02.656 06:34:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.656 06:34:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:02.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.656 --rc genhtml_branch_coverage=1 00:04:02.656 --rc genhtml_function_coverage=1 00:04:02.656 --rc genhtml_legend=1 00:04:02.656 --rc geninfo_all_blocks=1 00:04:02.656 --rc geninfo_unexecuted_blocks=1 00:04:02.656 00:04:02.656 ' 00:04:02.656 06:34:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:02.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.656 --rc genhtml_branch_coverage=1 00:04:02.656 --rc genhtml_function_coverage=1 00:04:02.656 --rc genhtml_legend=1 00:04:02.656 --rc geninfo_all_blocks=1 00:04:02.656 --rc geninfo_unexecuted_blocks=1 00:04:02.656 00:04:02.656 ' 00:04:02.656 06:34:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:02.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.656 --rc genhtml_branch_coverage=1 00:04:02.656 --rc genhtml_function_coverage=1 00:04:02.656 --rc genhtml_legend=1 00:04:02.656 --rc geninfo_all_blocks=1 00:04:02.656 --rc geninfo_unexecuted_blocks=1 00:04:02.656 00:04:02.656 ' 00:04:02.656 06:34:16 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:02.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.656 --rc genhtml_branch_coverage=1 00:04:02.656 --rc genhtml_function_coverage=1 00:04:02.656 --rc genhtml_legend=1 00:04:02.656 --rc geninfo_all_blocks=1 00:04:02.656 --rc geninfo_unexecuted_blocks=1 00:04:02.656 00:04:02.656 ' 00:04:02.656 06:34:16 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:02.656 06:34:16 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:02.656 06:34:16 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:02.656 06:34:16 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:02.656 06:34:16 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:02.656 06:34:16 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:02.656 06:34:16 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:02.656 06:34:16 -- setup/common.sh@18 -- # local node= 00:04:02.656 06:34:16 -- setup/common.sh@19 -- # local var val 00:04:02.656 06:34:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.656 06:34:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.656 06:34:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.656 06:34:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.656 06:34:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.656 06:34:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.656 06:34:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 5972976 kB' 'MemAvailable: 7354904 kB' 'Buffers: 3704 kB' 'Cached: 1594692 kB' 'SwapCached: 0 kB' 'Active: 455316 kB' 'Inactive: 1260040 kB' 'Active(anon): 127468 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 118580 kB' 'Mapped: 50840 kB' 'Shmem: 10508 kB' 'KReclaimable: 62380 kB' 'Slab: 155940 kB' 'SReclaimable: 62380 kB' 'SUnreclaim: 93560 kB' 'KernelStack: 6480 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 320712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.656 06:34:16 -- 
setup/common.sh@32 -- # continue 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.656 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.656 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.657 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.657 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.658 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.658 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.658 06:34:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.658 06:34:16 -- setup/common.sh@32 -- # continue 00:04:02.658 06:34:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.658 06:34:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.658 06:34:16 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:02.658 06:34:16 -- setup/common.sh@33 -- # echo 2048 00:04:02.658 06:34:16 -- setup/common.sh@33 -- # return 0 00:04:02.658 06:34:16 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:02.658 06:34:16 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:02.658 06:34:16 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:02.658 06:34:16 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:02.658 06:34:16 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:02.658 06:34:16 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:02.658 06:34:16 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:02.658 06:34:16 -- setup/hugepages.sh@207 -- # get_nodes 00:04:02.658 06:34:16 -- setup/hugepages.sh@27 -- # local node 00:04:02.658 06:34:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.658 06:34:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:02.658 06:34:16 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.658 06:34:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.658 06:34:16 -- setup/hugepages.sh@208 -- # clear_hp 00:04:02.658 06:34:16 -- setup/hugepages.sh@37 -- # local node hp 00:04:02.658 06:34:16 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.658 06:34:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.658 06:34:16 -- setup/hugepages.sh@41 -- # echo 0 00:04:02.658 06:34:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.658 06:34:16 -- setup/hugepages.sh@41 -- # echo 0 00:04:02.658 06:34:16 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:02.658 06:34:16 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:02.658 06:34:16 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:02.658 06:34:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.658 06:34:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.658 06:34:16 -- common/autotest_common.sh@10 -- # set +x 00:04:02.658 ************************************ 00:04:02.658 START TEST default_setup 00:04:02.658 ************************************ 00:04:02.658 06:34:16 -- common/autotest_common.sh@1114 -- # default_setup 00:04:02.658 06:34:16 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:02.658 06:34:16 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:02.658 06:34:16 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:02.658 06:34:16 -- setup/hugepages.sh@51 -- # shift 00:04:02.658 06:34:16 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:02.658 06:34:16 -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.658 06:34:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.658 06:34:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:02.658 06:34:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:02.658 06:34:16 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:02.658 06:34:16 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.658 06:34:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:02.658 06:34:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:02.658 06:34:16 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.658 06:34:16 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.658 06:34:16 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:02.658 06:34:16 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.658 06:34:16 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:02.658 06:34:16 -- setup/hugepages.sh@73 -- # return 0 00:04:02.658 06:34:16 -- setup/hugepages.sh@137 -- # setup output 00:04:02.658 06:34:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.658 06:34:16 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.596 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.596 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.596 06:34:17 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:03.596 06:34:17 -- setup/hugepages.sh@89 -- # local node 00:04:03.596 06:34:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.596 06:34:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.596 06:34:17 -- setup/hugepages.sh@92 -- # local surp 00:04:03.596 06:34:17 -- setup/hugepages.sh@93 -- # local resv 00:04:03.596 06:34:17 -- setup/hugepages.sh@94 -- # local anon 00:04:03.596 06:34:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.596 06:34:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.596 06:34:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.596 06:34:17 -- setup/common.sh@18 -- # local node= 00:04:03.596 06:34:17 -- setup/common.sh@19 -- # local var val 00:04:03.596 06:34:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.596 06:34:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.596 06:34:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.596 06:34:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.596 06:34:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.596 06:34:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 06:34:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8073284 kB' 'MemAvailable: 9455076 kB' 'Buffers: 3704 kB' 'Cached: 1594684 kB' 'SwapCached: 0 kB' 'Active: 456944 kB' 'Inactive: 1260056 kB' 'Active(anon): 129096 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 120248 kB' 'Mapped: 50772 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155772 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93692 kB' 'KernelStack: 6464 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.596 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.596 06:34:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- 
setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.597 06:34:17 -- setup/common.sh@33 -- # echo 0 00:04:03.597 06:34:17 -- setup/common.sh@33 -- # return 0 00:04:03.597 06:34:17 -- setup/hugepages.sh@97 -- # anon=0 00:04:03.597 06:34:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.597 06:34:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.597 06:34:17 -- setup/common.sh@18 -- # local node= 00:04:03.597 06:34:17 -- setup/common.sh@19 -- # local var val 00:04:03.597 06:34:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.597 06:34:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.597 06:34:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.597 06:34:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.597 06:34:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.597 06:34:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8073536 kB' 'MemAvailable: 9455328 kB' 'Buffers: 3704 kB' 'Cached: 1594684 kB' 'SwapCached: 0 kB' 'Active: 456104 kB' 'Inactive: 1260056 kB' 'Active(anon): 128256 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119360 kB' 'Mapped: 50632 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155768 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93688 kB' 'KernelStack: 6432 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.597 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.597 06:34:17 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.597 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- 
setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 
00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.598 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.598 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.599 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.599 06:34:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.599 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.599 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.599 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.599 06:34:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.860 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.860 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.860 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.860 06:34:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.860 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.860 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.860 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.860 06:34:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.860 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.860 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.860 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.860 06:34:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.860 06:34:17 -- setup/common.sh@33 -- # echo 0 00:04:03.860 06:34:17 -- setup/common.sh@33 -- # return 0 00:04:03.860 06:34:17 -- setup/hugepages.sh@99 -- # surp=0 00:04:03.860 06:34:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.860 06:34:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.860 06:34:17 -- setup/common.sh@18 -- # local node= 00:04:03.860 06:34:17 -- setup/common.sh@19 -- # local var val 00:04:03.860 06:34:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.860 06:34:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.860 06:34:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.860 06:34:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.860 06:34:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.860 06:34:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.860 
06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8073796 kB' 'MemAvailable: 9455588 kB' 'Buffers: 3704 kB' 'Cached: 1594684 kB' 'SwapCached: 0 kB' 'Active: 456348 kB' 'Inactive: 1260056 kB' 'Active(anon): 128500 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119624 kB' 'Mapped: 50632 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155768 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93688 kB' 'KernelStack: 6432 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 
06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.861 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.861 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 
06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.862 06:34:17 -- setup/common.sh@33 -- # echo 0 00:04:03.862 06:34:17 -- setup/common.sh@33 -- # return 0 00:04:03.862 06:34:17 -- setup/hugepages.sh@100 -- # resv=0 00:04:03.862 06:34:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.862 nr_hugepages=1024 00:04:03.862 06:34:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.862 resv_hugepages=0 00:04:03.862 06:34:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.862 surplus_hugepages=0 00:04:03.862 06:34:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.862 anon_hugepages=0 00:04:03.862 06:34:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.862 06:34:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.862 06:34:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.862 06:34:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.862 06:34:17 -- setup/common.sh@18 -- # local node= 00:04:03.862 06:34:17 -- setup/common.sh@19 -- # local var val 00:04:03.862 06:34:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.862 06:34:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.862 06:34:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.862 06:34:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.862 06:34:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.862 06:34:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8074300 kB' 'MemAvailable: 9456092 kB' 'Buffers: 3704 kB' 'Cached: 1594684 kB' 'SwapCached: 0 kB' 'Active: 456260 kB' 'Inactive: 1260056 kB' 'Active(anon): 128412 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119484 kB' 'Mapped: 50632 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155768 kB' 
'SReclaimable: 62080 kB' 'SUnreclaim: 93688 kB' 'KernelStack: 6416 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 
06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.862 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.862 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- 
setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- 
setup/common.sh@32 -- # continue 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.863 06:34:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.863 06:34:17 -- setup/common.sh@33 -- # echo 1024 00:04:03.863 06:34:17 -- setup/common.sh@33 -- # return 0 00:04:03.863 06:34:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.863 06:34:17 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.863 06:34:17 -- setup/hugepages.sh@27 -- # local node 00:04:03.863 06:34:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.863 06:34:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.863 06:34:17 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.863 06:34:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.863 06:34:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.863 06:34:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.863 06:34:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.863 06:34:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.863 06:34:17 -- setup/common.sh@18 -- # local node=0 00:04:03.863 06:34:17 -- setup/common.sh@19 -- # local var val 00:04:03.863 06:34:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.863 06:34:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.863 06:34:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.863 06:34:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.863 06:34:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.863 06:34:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.863 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8074300 kB' 'MemUsed: 4164812 kB' 'SwapCached: 0 kB' 'Active: 456264 kB' 'Inactive: 1260056 kB' 'Active(anon): 128416 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'FilePages: 1598388 kB' 'Mapped: 50632 kB' 'AnonPages: 119492 kB' 'Shmem: 10484 kB' 'KernelStack: 6416 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62080 kB' 'Slab: 155760 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 
06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # continue 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.864 06:34:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.864 06:34:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.864 06:34:17 -- setup/common.sh@33 -- # echo 0 00:04:03.864 06:34:17 -- setup/common.sh@33 -- # return 0 00:04:03.864 node0=1024 expecting 1024 00:04:03.864 06:34:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.864 06:34:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.864 06:34:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.864 06:34:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.864 06:34:17 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.864 06:34:17 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.864 00:04:03.864 real 0m1.089s 00:04:03.864 user 0m0.508s 00:04:03.864 sys 0m0.473s 00:04:03.865 ************************************ 00:04:03.865 END TEST default_setup 00:04:03.865 ************************************ 00:04:03.865 06:34:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:03.865 06:34:17 -- common/autotest_common.sh@10 -- # set +x 00:04:03.865 06:34:17 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:03.865 06:34:17 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:03.865 06:34:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.865 06:34:17 -- common/autotest_common.sh@10 -- # set +x 00:04:03.865 ************************************ 00:04:03.865 START TEST per_node_1G_alloc 00:04:03.865 ************************************ 00:04:03.865 06:34:17 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:03.865 06:34:17 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:03.865 06:34:17 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:03.865 06:34:17 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:03.865 06:34:17 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:03.865 06:34:17 -- setup/hugepages.sh@51 -- # shift 00:04:03.865 06:34:17 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:03.865 06:34:17 -- setup/hugepages.sh@52 -- # local node_ids 00:04:03.865 06:34:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.865 06:34:17 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:03.865 06:34:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:03.865 06:34:17 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:03.865 06:34:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.865 06:34:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:03.865 06:34:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:03.865 06:34:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.865 06:34:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.865 06:34:17 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:03.865 06:34:17 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:03.865 06:34:17 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:03.865 06:34:17 -- setup/hugepages.sh@73 -- # return 0 00:04:03.865 06:34:17 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:03.865 06:34:17 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:03.865 06:34:17 -- setup/hugepages.sh@146 -- # setup output 00:04:03.865 06:34:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.865 06:34:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.124 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.405 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.405 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.405 06:34:18 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:04.405 06:34:18 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:04.405 06:34:18 -- setup/hugepages.sh@89 -- # local node 00:04:04.405 06:34:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.405 06:34:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.405 06:34:18 -- setup/hugepages.sh@92 -- # local surp 00:04:04.405 06:34:18 -- setup/hugepages.sh@93 -- # local resv 00:04:04.405 06:34:18 -- setup/hugepages.sh@94 -- # local anon 00:04:04.405 06:34:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.405 06:34:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.405 06:34:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.405 06:34:18 -- setup/common.sh@18 -- # local node= 00:04:04.405 06:34:18 -- setup/common.sh@19 -- # local var val 00:04:04.405 06:34:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.405 06:34:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.405 06:34:18 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.405 06:34:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.405 06:34:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.405 06:34:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9127696 kB' 'MemAvailable: 10509488 kB' 'Buffers: 3704 kB' 'Cached: 1594684 kB' 'SwapCached: 0 kB' 'Active: 456528 kB' 'Inactive: 1260056 kB' 'Active(anon): 128680 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119540 kB' 'Mapped: 50728 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155812 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93732 kB' 'KernelStack: 6472 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 
-- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.405 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.405 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 
06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.406 06:34:18 -- setup/common.sh@33 -- # echo 0 00:04:04.406 06:34:18 -- setup/common.sh@33 -- # return 0 00:04:04.406 06:34:18 -- setup/hugepages.sh@97 -- # anon=0 00:04:04.406 06:34:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.406 06:34:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.406 06:34:18 -- setup/common.sh@18 -- # local node= 00:04:04.406 06:34:18 -- setup/common.sh@19 -- # local var val 00:04:04.406 06:34:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.406 06:34:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.406 06:34:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.406 06:34:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.406 06:34:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.406 06:34:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9127696 kB' 'MemAvailable: 10509488 kB' 'Buffers: 3704 kB' 'Cached: 1594684 kB' 'SwapCached: 0 kB' 'Active: 456256 kB' 'Inactive: 1260056 kB' 'Active(anon): 128408 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 
kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119536 kB' 'Mapped: 50660 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155804 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93724 kB' 'KernelStack: 6416 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.406 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.406 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.406 06:34:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # 
continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.407 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.407 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.407 06:34:18 -- setup/common.sh@33 -- # echo 0 00:04:04.408 06:34:18 -- setup/common.sh@33 -- # return 0 00:04:04.408 06:34:18 -- setup/hugepages.sh@99 -- # surp=0 00:04:04.408 06:34:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.408 06:34:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.408 06:34:18 -- setup/common.sh@18 -- # local node= 00:04:04.408 06:34:18 -- setup/common.sh@19 -- # local var val 00:04:04.408 06:34:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.408 06:34:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.408 06:34:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.408 06:34:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.408 06:34:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.408 06:34:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9127956 kB' 'MemAvailable: 10509748 kB' 'Buffers: 3704 kB' 'Cached: 1594684 kB' 'SwapCached: 0 kB' 'Active: 456396 kB' 'Inactive: 1260056 kB' 'Active(anon): 128548 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119652 kB' 'Mapped: 50660 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155804 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93724 kB' 'KernelStack: 6432 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.408 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.408 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.409 06:34:18 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.409 06:34:18 -- setup/common.sh@33 -- # echo 0 00:04:04.409 06:34:18 -- setup/common.sh@33 -- # return 0 00:04:04.409 nr_hugepages=512 00:04:04.409 resv_hugepages=0 00:04:04.409 surplus_hugepages=0 00:04:04.409 anon_hugepages=0 00:04:04.409 06:34:18 -- setup/hugepages.sh@100 -- # resv=0 00:04:04.409 06:34:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:04.409 06:34:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.409 06:34:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.409 06:34:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.409 06:34:18 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:04.409 06:34:18 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:04.409 06:34:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.409 06:34:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.409 06:34:18 -- setup/common.sh@18 -- # local node= 00:04:04.409 06:34:18 -- setup/common.sh@19 -- # local var val 00:04:04.409 06:34:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.409 06:34:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.409 06:34:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.409 06:34:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.409 06:34:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.409 06:34:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.409 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9128756 kB' 'MemAvailable: 10510548 kB' 'Buffers: 3704 kB' 'Cached: 1594684 kB' 'SwapCached: 0 kB' 'Active: 456468 kB' 'Inactive: 1260056 kB' 'Active(anon): 128620 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119712 kB' 'Mapped: 50660 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155804 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93724 kB' 'KernelStack: 6400 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 
06:34:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 
06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.410 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.410 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.411 06:34:18 -- setup/common.sh@33 -- # echo 512 00:04:04.411 06:34:18 -- setup/common.sh@33 -- # return 0 00:04:04.411 06:34:18 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:04.411 06:34:18 -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.411 06:34:18 -- setup/hugepages.sh@27 -- # local node 00:04:04.411 06:34:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.411 06:34:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:04.411 06:34:18 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:04.411 06:34:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:04.411 06:34:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.411 06:34:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:04.411 06:34:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.411 06:34:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.411 06:34:18 -- setup/common.sh@18 -- # local node=0 00:04:04.411 06:34:18 -- setup/common.sh@19 -- # local 
var val 00:04:04.411 06:34:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.411 06:34:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.411 06:34:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.411 06:34:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.411 06:34:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.411 06:34:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9128756 kB' 'MemUsed: 3110356 kB' 'SwapCached: 0 kB' 'Active: 456260 kB' 'Inactive: 1260056 kB' 'Active(anon): 128412 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'FilePages: 1598388 kB' 'Mapped: 50660 kB' 'AnonPages: 119592 kB' 'Shmem: 10484 kB' 'KernelStack: 6416 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62080 kB' 'Slab: 155804 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- 
setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.411 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.411 06:34:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.412 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.412 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.412 06:34:18 -- setup/common.sh@33 -- # echo 0 00:04:04.412 06:34:18 -- setup/common.sh@33 -- # return 0 00:04:04.412 06:34:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.412 06:34:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.412 node0=512 expecting 512 00:04:04.412 ************************************ 00:04:04.412 END TEST per_node_1G_alloc 00:04:04.412 ************************************ 00:04:04.412 06:34:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.412 06:34:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.412 06:34:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:04.412 06:34:18 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:04.412 00:04:04.412 real 0m0.606s 00:04:04.412 user 0m0.280s 00:04:04.412 sys 0m0.311s 00:04:04.412 06:34:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:04.412 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:04:04.678 06:34:18 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:04.678 06:34:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.678 06:34:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.678 06:34:18 -- common/autotest_common.sh@10 -- # set +x 00:04:04.678 ************************************ 00:04:04.678 START TEST even_2G_alloc 00:04:04.678 ************************************ 00:04:04.678 06:34:18 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:04.678 06:34:18 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:04.678 06:34:18 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:04.678 06:34:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:04.678 06:34:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.678 06:34:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:04.678 06:34:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.678 06:34:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.678 06:34:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.678 06:34:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:04.678 06:34:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.678 06:34:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.678 06:34:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.678 06:34:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.678 06:34:18 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:04.678 06:34:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.678 06:34:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:04.678 06:34:18 -- setup/hugepages.sh@83 -- # : 0 00:04:04.678 06:34:18 -- setup/hugepages.sh@84 -- # : 0 00:04:04.678 06:34:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.678 06:34:18 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:04.678 06:34:18 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:04.678 06:34:18 -- setup/hugepages.sh@153 -- # setup output 00:04:04.678 06:34:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.678 06:34:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.939 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.939 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.939 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.939 06:34:18 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:04.939 06:34:18 -- setup/hugepages.sh@89 -- # local node 00:04:04.939 06:34:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.939 06:34:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.939 06:34:18 -- setup/hugepages.sh@92 -- # local surp 00:04:04.939 06:34:18 -- setup/hugepages.sh@93 -- # local resv 00:04:04.939 06:34:18 -- setup/hugepages.sh@94 -- # local anon 00:04:04.939 06:34:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.939 06:34:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.939 06:34:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.939 06:34:18 -- setup/common.sh@18 -- # local node= 00:04:04.939 06:34:18 -- setup/common.sh@19 -- # local var val 00:04:04.939 06:34:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.939 06:34:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.939 06:34:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.939 06:34:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.939 06:34:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.939 06:34:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.939 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.939 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8089324 kB' 'MemAvailable: 9471116 kB' 'Buffers: 3704 kB' 'Cached: 1594684 kB' 'SwapCached: 0 kB' 'Active: 456328 kB' 'Inactive: 1260056 kB' 'Active(anon): 128480 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119600 kB' 'Mapped: 50792 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155844 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93764 kB' 'KernelStack: 6432 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 
06:34:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # 
continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.940 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.940 06:34:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.940 06:34:18 -- setup/common.sh@33 -- # echo 0 00:04:04.940 06:34:18 -- setup/common.sh@33 -- # return 0 00:04:04.941 06:34:18 -- setup/hugepages.sh@97 -- # anon=0 00:04:04.941 06:34:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.941 06:34:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.941 06:34:18 -- setup/common.sh@18 -- # local node= 00:04:04.941 06:34:18 -- setup/common.sh@19 -- # local var val 00:04:04.941 06:34:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.941 06:34:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.941 06:34:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.941 06:34:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.941 06:34:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.941 06:34:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8089072 kB' 'MemAvailable: 9470864 kB' 'Buffers: 3704 kB' 'Cached: 1594684 kB' 'SwapCached: 0 kB' 'Active: 456332 kB' 'Inactive: 1260056 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119644 kB' 'Mapped: 50664 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155844 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93764 kB' 'KernelStack: 6432 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 
00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.941 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.941 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # 
continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.942 06:34:18 -- setup/common.sh@33 -- # echo 0 00:04:04.942 06:34:18 -- setup/common.sh@33 -- # return 0 00:04:04.942 06:34:18 -- setup/hugepages.sh@99 -- # surp=0 00:04:04.942 06:34:18 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.942 06:34:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.942 06:34:18 -- setup/common.sh@18 -- # local node= 00:04:04.942 06:34:18 -- setup/common.sh@19 -- # local var val 00:04:04.942 06:34:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.942 06:34:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.942 06:34:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.942 06:34:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.942 06:34:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.942 06:34:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.942 06:34:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8089072 kB' 'MemAvailable: 9470864 kB' 'Buffers: 3704 kB' 'Cached: 1594684 kB' 'SwapCached: 0 kB' 'Active: 456280 kB' 'Inactive: 1260056 kB' 'Active(anon): 128432 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119520 kB' 'Mapped: 50664 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155840 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93760 kB' 'KernelStack: 6416 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:04.942 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.942 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- 
setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.204 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.204 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 
00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.205 06:34:18 -- setup/common.sh@33 -- # echo 0 00:04:05.205 06:34:18 -- setup/common.sh@33 -- # return 0 00:04:05.205 06:34:18 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.205 nr_hugepages=1024 00:04:05.205 06:34:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.205 resv_hugepages=0 00:04:05.205 surplus_hugepages=0 00:04:05.205 anon_hugepages=0 00:04:05.205 06:34:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.205 06:34:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.205 06:34:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.205 06:34:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.205 06:34:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.205 06:34:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.205 06:34:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.205 06:34:18 -- setup/common.sh@18 -- # local node= 00:04:05.205 06:34:18 -- setup/common.sh@19 -- # local var val 00:04:05.205 06:34:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.205 06:34:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.205 06:34:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.205 06:34:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.205 06:34:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.205 06:34:18 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8089072 kB' 'MemAvailable: 9470864 kB' 'Buffers: 3704 kB' 'Cached: 1594684 kB' 'SwapCached: 0 kB' 'Active: 456316 kB' 'Inactive: 1260056 kB' 'Active(anon): 128468 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'AnonPages: 119568 kB' 'Mapped: 50664 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155836 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93756 kB' 'KernelStack: 6416 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 
06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.205 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.205 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:18 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 
00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 
00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.206 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.206 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.206 06:34:19 -- setup/common.sh@33 -- # echo 1024 00:04:05.206 06:34:19 -- setup/common.sh@33 -- # return 0 00:04:05.206 06:34:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.206 06:34:19 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.206 06:34:19 -- setup/hugepages.sh@27 -- # local node 00:04:05.206 06:34:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.206 06:34:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.206 06:34:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.206 06:34:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.206 06:34:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.206 06:34:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.206 06:34:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.206 06:34:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.206 06:34:19 -- setup/common.sh@18 -- # local node=0 00:04:05.206 06:34:19 -- setup/common.sh@19 -- # local var val 00:04:05.206 06:34:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.206 06:34:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.206 06:34:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.206 06:34:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.206 06:34:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.207 06:34:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8089852 kB' 'MemUsed: 4149260 kB' 'SwapCached: 0 kB' 'Active: 456480 kB' 'Inactive: 1260056 kB' 'Active(anon): 128632 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260056 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 340 kB' 'Writeback: 0 kB' 'FilePages: 1598388 kB' 'Mapped: 50664 kB' 'AnonPages: 119712 kB' 'Shmem: 10484 kB' 'KernelStack: 6448 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62080 kB' 'Slab: 155836 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- 
setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.207 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.207 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.207 06:34:19 -- setup/common.sh@33 -- # echo 0 00:04:05.207 06:34:19 -- setup/common.sh@33 -- # return 0 00:04:05.207 node0=1024 expecting 1024 00:04:05.207 ************************************ 00:04:05.207 END TEST even_2G_alloc 00:04:05.207 ************************************ 00:04:05.207 06:34:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.208 06:34:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 
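The verification finishing here checks that the 1024-page pool is fully accounted for (HugePages_Total equals the requested count plus surplus and reserved pages) and that node0 reports the expected 1024 pages. A hedged sketch of the same consistency check against the /proc and sysfs files the trace reads; variable names are illustrative:

  # Re-run the accounting even_2G_alloc just verified: global pool must equal
  # requested + surplus + reserved, and node0 must hold the expected pages.
  expected=1024
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  node0=$(awk '/HugePages_Total:/ {print $NF}' /sys/devices/system/node/node0/meminfo)
  (( total == expected + surp + resv )) || echo "pool mismatch: $total != $expected + $surp + $resv"
  (( node0 == expected )) || echo "node0 mismatch: $node0 != $expected"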
00:04:05.208 06:34:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.208 06:34:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.208 06:34:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.208 06:34:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.208 00:04:05.208 real 0m0.630s 00:04:05.208 user 0m0.305s 00:04:05.208 sys 0m0.319s 00:04:05.208 06:34:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:05.208 06:34:19 -- common/autotest_common.sh@10 -- # set +x 00:04:05.208 06:34:19 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:05.208 06:34:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.208 06:34:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.208 06:34:19 -- common/autotest_common.sh@10 -- # set +x 00:04:05.208 ************************************ 00:04:05.208 START TEST odd_alloc 00:04:05.208 ************************************ 00:04:05.208 06:34:19 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:05.208 06:34:19 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:05.208 06:34:19 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:05.208 06:34:19 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:05.208 06:34:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.208 06:34:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:05.208 06:34:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:05.208 06:34:19 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.208 06:34:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.208 06:34:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:05.208 06:34:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:05.208 06:34:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.208 06:34:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.208 06:34:19 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.208 06:34:19 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:05.208 06:34:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.208 06:34:19 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:05.208 06:34:19 -- setup/hugepages.sh@83 -- # : 0 00:04:05.208 06:34:19 -- setup/hugepages.sh@84 -- # : 0 00:04:05.208 06:34:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.208 06:34:19 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:05.208 06:34:19 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:05.208 06:34:19 -- setup/hugepages.sh@160 -- # setup output 00:04:05.208 06:34:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.208 06:34:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.729 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.729 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.729 06:34:19 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:05.729 06:34:19 -- setup/hugepages.sh@89 -- # local node 00:04:05.729 06:34:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.729 06:34:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.729 06:34:19 -- setup/hugepages.sh@92 -- # local surp 00:04:05.729 06:34:19 -- setup/hugepages.sh@93 -- # local resv 00:04:05.729 06:34:19 -- setup/hugepages.sh@94 -- # local anon 00:04:05.729 06:34:19 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.729 06:34:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.729 06:34:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.729 06:34:19 -- setup/common.sh@18 -- # local node= 00:04:05.729 06:34:19 -- setup/common.sh@19 -- # local var val 00:04:05.729 06:34:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.729 06:34:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.729 06:34:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.729 06:34:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.729 06:34:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.729 06:34:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.729 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8085252 kB' 'MemAvailable: 9467048 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 456556 kB' 'Inactive: 1260060 kB' 'Active(anon): 128708 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119844 kB' 'Mapped: 50752 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155864 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93784 kB' 'KernelStack: 6424 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55192 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 
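Two details of the odd_alloc setup above are worth spelling out. The pool size comes from HUGEMEM=2049: 2049 MiB is 2098176 kB, which at the default 2048 kB hugepage size works out to 1024.5 pages, and the helper lands on the odd count nr_hugepages=1025; the exact rounding rule is not visible in the trace, but rounding up reproduces the 2098176 -> 1025 result shown. The anon-hugepage probe that follows only runs because /sys/kernel/mm/transparent_hugepage/enabled reads "always [madvise] never", i.e. THP is not pinned to [never]. A rough sketch of the sizing arithmetic under that round-up assumption:

  # Sketch only: derive the odd hugepage count the way the log suggests
  # (2049 MiB requested, 2048 kB default hugepage size, rounding up).
  size_kb=$((2049 * 1024))                  # 2098176 kB, matches the trace
  hugepage_kb=2048
  nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
  echo "$nr_hugepages"                      # -> 1025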
00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # 
continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.730 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.730 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.731 06:34:19 -- setup/common.sh@33 -- # echo 0 00:04:05.731 06:34:19 -- setup/common.sh@33 -- # return 0 00:04:05.731 06:34:19 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.731 06:34:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.731 06:34:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.731 06:34:19 -- setup/common.sh@18 -- # local node= 00:04:05.731 06:34:19 -- setup/common.sh@19 -- # local var val 00:04:05.731 06:34:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.731 06:34:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.731 06:34:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.731 06:34:19 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.731 06:34:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.731 06:34:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8085000 kB' 'MemAvailable: 9466796 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 456364 kB' 'Inactive: 1260060 kB' 'Active(anon): 128516 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119656 kB' 'Mapped: 50632 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155864 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93784 kB' 'KernelStack: 6432 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 
06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.731 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.731 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.732 06:34:19 -- setup/common.sh@33 -- # echo 0 00:04:05.732 06:34:19 -- setup/common.sh@33 -- # return 0 00:04:05.732 06:34:19 -- setup/hugepages.sh@99 -- # surp=0 00:04:05.732 06:34:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.732 06:34:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.732 06:34:19 -- setup/common.sh@18 -- # local node= 00:04:05.732 06:34:19 -- setup/common.sh@19 -- # local var val 00:04:05.732 06:34:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.732 06:34:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.732 06:34:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.732 06:34:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.732 06:34:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.732 06:34:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8085256 kB' 'MemAvailable: 9467052 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 456468 kB' 'Inactive: 1260060 kB' 'Active(anon): 128620 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119740 kB' 'Mapped: 50632 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155864 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93784 kB' 'KernelStack: 6432 kB' 
'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55176 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.732 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.732 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.733 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.733 06:34:19 -- setup/common.sh@33 -- # echo 0 00:04:05.733 06:34:19 -- setup/common.sh@33 -- # return 0 00:04:05.733 nr_hugepages=1025 00:04:05.733 resv_hugepages=0 00:04:05.733 surplus_hugepages=0 00:04:05.733 anon_hugepages=0 00:04:05.733 06:34:19 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.733 06:34:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:05.733 06:34:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.733 06:34:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.733 06:34:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.733 06:34:19 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:05.733 06:34:19 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:05.733 06:34:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.733 06:34:19 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.733 06:34:19 -- setup/common.sh@18 -- # local node= 00:04:05.733 06:34:19 -- setup/common.sh@19 -- # local var val 00:04:05.733 06:34:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.733 06:34:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.733 06:34:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.733 06:34:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.733 06:34:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.733 06:34:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.733 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8085916 kB' 'MemAvailable: 9467712 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 456344 kB' 'Inactive: 1260060 kB' 'Active(anon): 128496 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119580 kB' 'Mapped: 50632 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155864 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93784 kB' 'KernelStack: 6400 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 
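At this point the trace has collected anon=0, surp=0 and resv=0, echoed nr_hugepages=1025 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and is re-reading HugePages_Total (1025) so verify_nr_hugepages can check that the kernel-reported total equals the requested count plus surplus and reserved pages. A self-contained sketch of that consistency check, with the variable names taken from the echoes above:

  # Sketch only: the accounting check verify_nr_hugepages appears to perform.
  nr_hugepages=1025; surp=0; resv=0
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)   # 1025 here
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"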
00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 
00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.734 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.734 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.735 06:34:19 -- setup/common.sh@33 -- # echo 1025 00:04:05.735 06:34:19 -- setup/common.sh@33 -- # return 0 00:04:05.735 06:34:19 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:05.735 06:34:19 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.735 06:34:19 -- setup/hugepages.sh@27 -- # local node 00:04:05.735 06:34:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.735 06:34:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
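At this point the HugePages_Total lookup has returned 1025 and the script is enumerating NUMA nodes; the block below repeats the same scan against node 0's meminfo. A sketch of how the trace switches data sources, assuming the sysfs path shown in the log:

  shopt -s extglob
  node=0
  mem_f=/proc/meminfo
  # with a node argument and sysfs support, read the per-node file instead
  [[ -e /sys/devices/system/node/node$node/meminfo ]] && \
      mem_f=/sys/devices/system/node/node$node/meminfo
  mapfile -t mem < "$mem_f"
  # per-node lines carry a "Node 0 " prefix; strip it so field names match /proc/meminfo
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}"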
00:04:05.735 06:34:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.735 06:34:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.735 06:34:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.735 06:34:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.735 06:34:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.735 06:34:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.735 06:34:19 -- setup/common.sh@18 -- # local node=0 00:04:05.735 06:34:19 -- setup/common.sh@19 -- # local var val 00:04:05.735 06:34:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.735 06:34:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.735 06:34:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.735 06:34:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.735 06:34:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.735 06:34:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8085664 kB' 'MemUsed: 4153448 kB' 'SwapCached: 0 kB' 'Active: 456300 kB' 'Inactive: 1260060 kB' 'Active(anon): 128452 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1598392 kB' 'Mapped: 50632 kB' 'AnonPages: 119540 kB' 'Shmem: 10484 kB' 'KernelStack: 6416 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62080 kB' 'Slab: 155860 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 
06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.735 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.735 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 
06:34:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.736 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.736 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.995 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.995 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.995 06:34:19 -- setup/common.sh@32 -- # continue 00:04:05.995 06:34:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.995 06:34:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.995 06:34:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.995 06:34:19 -- setup/common.sh@33 -- # echo 0 00:04:05.995 06:34:19 -- setup/common.sh@33 -- # return 0 00:04:05.995 node0=1025 expecting 1025 00:04:05.995 ************************************ 00:04:05.995 END TEST odd_alloc 00:04:05.996 ************************************ 00:04:05.996 06:34:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.996 06:34:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.996 06:34:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.996 06:34:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.996 06:34:19 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:05.996 06:34:19 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:05.996 00:04:05.996 real 0m0.615s 00:04:05.996 user 0m0.296s 00:04:05.996 sys 0m0.311s 00:04:05.996 06:34:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:05.996 06:34:19 -- common/autotest_common.sh@10 -- # set +x 00:04:05.996 06:34:19 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:05.996 06:34:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.996 06:34:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.996 06:34:19 -- common/autotest_common.sh@10 -- # set +x 00:04:05.996 ************************************ 00:04:05.996 START TEST custom_alloc 00:04:05.996 ************************************ 00:04:05.996 06:34:19 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:05.996 06:34:19 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:05.996 06:34:19 -- setup/hugepages.sh@169 -- # local node 00:04:05.996 06:34:19 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:05.996 06:34:19 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:05.996 06:34:19 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:05.996 06:34:19 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:04:05.996 06:34:19 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:05.996 06:34:19 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:05.996 06:34:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.996 06:34:19 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:05.996 06:34:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:05.996 06:34:19 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.996 06:34:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.996 06:34:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:05.996 06:34:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:05.996 06:34:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.996 06:34:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.996 06:34:19 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.996 06:34:19 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:05.996 06:34:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.996 06:34:19 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:05.996 06:34:19 -- setup/hugepages.sh@83 -- # : 0 00:04:05.996 06:34:19 -- setup/hugepages.sh@84 -- # : 0 00:04:05.996 06:34:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:05.996 06:34:19 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:05.996 06:34:19 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:05.996 06:34:19 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:05.996 06:34:19 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:05.996 06:34:19 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:05.996 06:34:19 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:05.996 06:34:19 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:05.996 06:34:19 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.996 06:34:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:05.996 06:34:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:05.996 06:34:19 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.996 06:34:19 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.996 06:34:19 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:05.996 06:34:19 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:05.996 06:34:19 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:05.996 06:34:19 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:05.996 06:34:19 -- setup/hugepages.sh@78 -- # return 0 00:04:05.996 06:34:19 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:05.996 06:34:19 -- setup/hugepages.sh@187 -- # setup output 00:04:05.996 06:34:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.996 06:34:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.257 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.257 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.257 06:34:20 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:06.257 06:34:20 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:06.257 06:34:20 -- setup/hugepages.sh@89 -- # local node 00:04:06.257 06:34:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.257 06:34:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.257 06:34:20 -- setup/hugepages.sh@92 -- # local surp 
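The custom_alloc prologue above converts a 1048576 kB request into 512 hugepages on a single node and exports that as HUGENODE before re-running setup. The arithmetic, using values taken from this log (Hugepagesize is reported as 2048 kB in the meminfo snapshots below):

  size_kb=1048576                              # requested pool size passed to get_test_nr_hugepages
  hugepagesize_kb=2048                         # default 2 MiB hugepages on this VM
  nr_hugepages=$(( size_kb / hugepagesize_kb ))
  echo "nr_hugepages=$nr_hugepages"            # 512
  echo "HUGENODE=nodes_hp[0]=$nr_hugepages"    # matches the HUGENODE string in the trace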
00:04:06.257 06:34:20 -- setup/hugepages.sh@93 -- # local resv 00:04:06.257 06:34:20 -- setup/hugepages.sh@94 -- # local anon 00:04:06.257 06:34:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.257 06:34:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.257 06:34:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.257 06:34:20 -- setup/common.sh@18 -- # local node= 00:04:06.257 06:34:20 -- setup/common.sh@19 -- # local var val 00:04:06.257 06:34:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.257 06:34:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.257 06:34:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.257 06:34:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.257 06:34:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.257 06:34:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9138244 kB' 'MemAvailable: 10520040 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 457044 kB' 'Inactive: 1260060 kB' 'Active(anon): 129196 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 120356 kB' 'Mapped: 50748 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155892 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93812 kB' 'KernelStack: 6472 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.257 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 
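verify_nr_hugepages opened with the transparent-hugepage gate ('always [madvise] never' matched against *[never]*) before scanning for AnonHugePages, which comes back 0 kB in this run. A sketch of that gate; the sysfs path is an assumption, since the trace only shows the already-expanded test:

  # hypothetical reconstruction of the THP check at hugepages.sh@96
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      # THP is not hard-disabled, so anonymous hugepage usage is worth reporting
      grep -m1 '^AnonHugePages:' /proc/meminfo
  fi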
00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.258 06:34:20 -- setup/common.sh@33 -- # echo 0 00:04:06.258 06:34:20 -- setup/common.sh@33 -- # return 0 00:04:06.258 06:34:20 -- setup/hugepages.sh@97 -- # anon=0 00:04:06.258 06:34:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.258 06:34:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.258 06:34:20 -- setup/common.sh@18 -- # local node= 00:04:06.258 06:34:20 -- setup/common.sh@19 -- # local var val 00:04:06.258 06:34:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.258 06:34:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:04:06.258 06:34:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.258 06:34:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.258 06:34:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.258 06:34:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9137996 kB' 'MemAvailable: 10519792 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 456332 kB' 'Inactive: 1260060 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119540 kB' 'Mapped: 50628 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155896 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93816 kB' 'KernelStack: 6400 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- 
setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.258 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.258 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 
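The system-wide snapshots printed in this verify pass show HugePages_Total: 512 with Hugepagesize: 2048 kB and Hugetlb: 1048576 kB, i.e. exactly the pool custom_alloc asked for. The consistency check, with values from the log:

  hugepages_total=512
  hugepagesize_kb=2048
  hugetlb_kb=$(( hugepages_total * hugepagesize_kb ))
  echo "$hugetlb_kb kB"     # 1048576 kB, matching the 'Hugetlb:' field in the snapshots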
00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.259 06:34:20 -- setup/common.sh@33 -- # echo 0 00:04:06.259 06:34:20 -- setup/common.sh@33 -- # return 0 00:04:06.259 06:34:20 -- setup/hugepages.sh@99 -- # surp=0 00:04:06.259 06:34:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.259 06:34:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.259 06:34:20 -- setup/common.sh@18 -- # local node= 00:04:06.259 06:34:20 -- setup/common.sh@19 -- # local var val 00:04:06.259 06:34:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.259 06:34:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.259 06:34:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.259 06:34:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.259 06:34:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.259 06:34:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9137996 kB' 'MemAvailable: 10519792 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 456356 kB' 'Inactive: 1260060 kB' 'Active(anon): 128508 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119612 kB' 'Mapped: 50888 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155896 kB' 
'SReclaimable: 62080 kB' 'SUnreclaim: 93816 kB' 'KernelStack: 6416 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.259 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.259 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.260 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.260 06:34:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.260 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.260 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.260 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.260 06:34:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.260 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.260 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.260 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.260 06:34:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.260 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.260 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.260 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.260 06:34:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.260 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.260 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.260 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.260 06:34:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.260 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.260 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.260 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.260 06:34:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.260 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.260 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.260 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 
00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.521 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.521 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 
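The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" entries above are setup/common.sh's get_meminfo helper scanning each /proc/meminfo field until it finds the one requested (here HugePages_Rsvd). A rough reconstruction of that helper, inferred only from the traced commands and not taken verbatim from the SPDK source, looks like this:

shopt -s extglob   # needed for the +([0-9]) pattern used to strip per-node prefixes

get_meminfo() {
    local get=$1 node=$2               # e.g. get=HugePages_Rsvd, node empty or a node id
    local mem_f=/proc/meminfo mem var val _ line
    # Per-node queries read that node's own meminfo instead of the global file.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node N " on per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # every mismatch shows up as "continue" in the trace
        echo "$val"                        # e.g. 0 for HugePages_Rsvd
        return 0
    done
}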
00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.522 06:34:20 -- setup/common.sh@33 -- # echo 0 00:04:06.522 06:34:20 -- setup/common.sh@33 -- # return 0 00:04:06.522 nr_hugepages=512 00:04:06.522 resv_hugepages=0 00:04:06.522 06:34:20 -- setup/hugepages.sh@100 -- # resv=0 00:04:06.522 06:34:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:06.522 06:34:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.522 surplus_hugepages=0 00:04:06.522 06:34:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.522 anon_hugepages=0 00:04:06.522 06:34:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.522 06:34:20 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:06.522 06:34:20 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:06.522 06:34:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.522 06:34:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.522 06:34:20 -- setup/common.sh@18 -- # local node= 00:04:06.522 06:34:20 -- setup/common.sh@19 -- # local var val 00:04:06.522 06:34:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.522 06:34:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.522 06:34:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.522 06:34:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.522 06:34:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.522 06:34:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9137996 kB' 'MemAvailable: 10519792 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 456008 kB' 'Inactive: 1260060 kB' 'Active(anon): 128160 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 119564 kB' 'Mapped: 50628 kB' 'Shmem: 10484 kB' 'KReclaimable: 62080 kB' 'Slab: 155892 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93812 kB' 'KernelStack: 6416 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 
'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 
-- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.522 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.522 06:34:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 
06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.523 06:34:20 -- setup/common.sh@33 -- # echo 512 00:04:06.523 06:34:20 -- setup/common.sh@33 -- # return 0 00:04:06.523 06:34:20 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:06.523 06:34:20 -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.523 06:34:20 -- setup/hugepages.sh@27 -- # local node 00:04:06.523 06:34:20 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:04:06.523 06:34:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:06.523 06:34:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:06.523 06:34:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.523 06:34:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.523 06:34:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.523 06:34:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.523 06:34:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.523 06:34:20 -- setup/common.sh@18 -- # local node=0 00:04:06.523 06:34:20 -- setup/common.sh@19 -- # local var val 00:04:06.523 06:34:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.523 06:34:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.523 06:34:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.523 06:34:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.523 06:34:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.523 06:34:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 9137996 kB' 'MemUsed: 3101116 kB' 'SwapCached: 0 kB' 'Active: 456364 kB' 'Inactive: 1260060 kB' 'Active(anon): 128516 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1598392 kB' 'Mapped: 50628 kB' 'AnonPages: 119668 kB' 'Shmem: 10484 kB' 'KernelStack: 6432 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62080 kB' 'Slab: 155892 kB' 'SReclaimable: 62080 kB' 'SUnreclaim: 93812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.523 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.523 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 
06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # continue 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.524 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.524 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.524 06:34:20 -- setup/common.sh@33 -- # echo 0 00:04:06.524 06:34:20 -- setup/common.sh@33 -- # return 0 00:04:06.524 06:34:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.524 06:34:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.524 06:34:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.524 06:34:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.524 06:34:20 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:06.524 node0=512 expecting 512 00:04:06.524 06:34:20 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:06.524 ************************************ 00:04:06.524 END TEST custom_alloc 00:04:06.524 ************************************ 00:04:06.524 00:04:06.524 real 0m0.577s 00:04:06.524 user 0m0.269s 00:04:06.524 sys 0m0.318s 00:04:06.524 06:34:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:06.524 06:34:20 -- common/autotest_common.sh@10 -- # set +x 00:04:06.524 06:34:20 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:06.524 06:34:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.524 06:34:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.524 06:34:20 -- common/autotest_common.sh@10 -- # set +x 00:04:06.524 ************************************ 00:04:06.524 START TEST no_shrink_alloc 00:04:06.524 ************************************ 00:04:06.524 06:34:20 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:06.524 06:34:20 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:06.524 06:34:20 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:06.524 06:34:20 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:06.524 06:34:20 -- 
setup/hugepages.sh@51 -- # shift 00:04:06.524 06:34:20 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:06.524 06:34:20 -- setup/hugepages.sh@52 -- # local node_ids 00:04:06.524 06:34:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:06.524 06:34:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:06.524 06:34:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:06.524 06:34:20 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:06.524 06:34:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.524 06:34:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:06.524 06:34:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:06.524 06:34:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.524 06:34:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.524 06:34:20 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:06.524 06:34:20 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:06.524 06:34:20 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:06.524 06:34:20 -- setup/hugepages.sh@73 -- # return 0 00:04:06.524 06:34:20 -- setup/hugepages.sh@198 -- # setup output 00:04:06.524 06:34:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.524 06:34:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.784 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.784 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.046 06:34:20 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:07.046 06:34:20 -- setup/hugepages.sh@89 -- # local node 00:04:07.046 06:34:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.046 06:34:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.046 06:34:20 -- setup/hugepages.sh@92 -- # local surp 00:04:07.046 06:34:20 -- setup/hugepages.sh@93 -- # local resv 00:04:07.046 06:34:20 -- setup/hugepages.sh@94 -- # local anon 00:04:07.046 06:34:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.046 06:34:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.046 06:34:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.046 06:34:20 -- setup/common.sh@18 -- # local node= 00:04:07.046 06:34:20 -- setup/common.sh@19 -- # local var val 00:04:07.046 06:34:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.046 06:34:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.046 06:34:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.046 06:34:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.046 06:34:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.046 06:34:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8097764 kB' 'MemAvailable: 9479544 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 454196 kB' 'Inactive: 1260060 kB' 'Active(anon): 126348 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117476 kB' 
'Mapped: 50108 kB' 'Shmem: 10484 kB' 'KReclaimable: 62048 kB' 'Slab: 155588 kB' 'SReclaimable: 62048 kB' 'SUnreclaim: 93540 kB' 'KernelStack: 6312 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.046 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.046 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
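The no_shrink_alloc run started above walks through the same verify_nr_hugepages sequence that custom_alloc just completed: read AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total via get_meminfo, check the total against the requested count, then add each node's surplus and report the per-node figure (the source of the "node0=512 expecting 512" line earlier). A condensed outline of that flow, pieced together from the hugepages.sh steps in the trace (approximate, not the verbatim script; it assumes the get_meminfo sketch above and that get_nodes has already filled the global nodes_test/nodes_sys arrays, one entry per NUMA node):

verify_nr_hugepages_outline() {
    local nr_hugepages=$1 anon surp resv total node
    anon=$(get_meminfo AnonHugePages)      # 0 in this run
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 512 for custom_alloc; 1024 requested here
    (( total == nr_hugepages + surp + resv )) || return 1
    for node in "${!nodes_test[@]}"; do
        # fold reserved plus the node's surplus pages into the expected count
        (( nodes_test[node] += resv + $(get_meminfo HugePages_Surp "$node") ))
        echo "node${node}=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
}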
00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.047 06:34:20 -- setup/common.sh@33 -- # echo 0 00:04:07.047 06:34:20 -- setup/common.sh@33 -- # return 0 00:04:07.047 06:34:20 -- setup/hugepages.sh@97 -- # anon=0 00:04:07.047 06:34:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.047 06:34:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.047 06:34:20 -- setup/common.sh@18 -- # local node= 00:04:07.047 06:34:20 -- setup/common.sh@19 -- # local var val 00:04:07.047 06:34:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.047 06:34:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.047 06:34:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.047 06:34:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.047 06:34:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.047 06:34:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8097876 kB' 'MemAvailable: 9479656 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 453828 kB' 'Inactive: 1260060 kB' 'Active(anon): 125980 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117184 kB' 'Mapped: 49880 kB' 'Shmem: 10484 kB' 'KReclaimable: 62048 kB' 'Slab: 155592 kB' 'SReclaimable: 62048 kB' 'SUnreclaim: 93544 kB' 'KernelStack: 6328 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 305668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.047 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.047 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 
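The lookup being traced here (setup/common.sh's get_meminfo) splits each /proc/meminfo line on ': ', skips keys that do not match the requested field, and prints the matching value. A simplified sketch under those assumptions; get_meminfo_sketch and its internals are illustrative stand-ins, not the SPDK helper itself:

  get_meminfo_sketch() {
      local get=$1 mem_f=/proc/meminfo var val rest
      while IFS=': ' read -r var val rest; do
          [[ $var == "$get" ]] || continue   # one traced comparison per meminfo key
          echo "$val"
          return 0
      done < "$mem_f"
      echo 0                                 # key not found (not exercised in this log)
  }
  get_meminfo_sketch HugePages_Surp          # prints 0 on this builder, per the log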
00:04:07.047 06:34:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.048 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.048 06:34:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.048 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.049 06:34:20 -- setup/common.sh@33 -- # echo 0 00:04:07.049 06:34:20 -- setup/common.sh@33 -- # return 0 00:04:07.049 06:34:20 -- setup/hugepages.sh@99 -- # surp=0 00:04:07.049 06:34:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.049 06:34:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.049 06:34:20 -- setup/common.sh@18 -- # local node= 00:04:07.049 06:34:20 -- setup/common.sh@19 -- # local var val 00:04:07.049 06:34:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.049 06:34:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.049 06:34:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.049 06:34:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.049 06:34:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.049 06:34:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8097876 kB' 'MemAvailable: 9479656 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 453724 kB' 'Inactive: 1260060 kB' 'Active(anon): 125876 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117104 kB' 'Mapped: 49880 kB' 'Shmem: 10484 kB' 'KReclaimable: 62048 kB' 'Slab: 155576 kB' 'SReclaimable: 62048 kB' 'SUnreclaim: 93528 kB' 'KernelStack: 6296 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # 
continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.049 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.049 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 
06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.050 06:34:20 -- setup/common.sh@33 -- # echo 0 00:04:07.050 06:34:20 -- setup/common.sh@33 -- # return 0 00:04:07.050 06:34:20 -- setup/hugepages.sh@100 -- # resv=0 00:04:07.050 nr_hugepages=1024 00:04:07.050 06:34:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.050 resv_hugepages=0 00:04:07.050 06:34:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.050 surplus_hugepages=0 00:04:07.050 06:34:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.050 anon_hugepages=0 00:04:07.050 06:34:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.050 06:34:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.050 06:34:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.050 06:34:20 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:04:07.050 06:34:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.050 06:34:20 -- setup/common.sh@18 -- # local node= 00:04:07.050 06:34:20 -- setup/common.sh@19 -- # local var val 00:04:07.050 06:34:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.050 06:34:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.050 06:34:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.050 06:34:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.050 06:34:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.050 06:34:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8097876 kB' 'MemAvailable: 9479656 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 453772 kB' 'Inactive: 1260060 kB' 'Active(anon): 125924 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 116884 kB' 'Mapped: 49780 kB' 'Shmem: 10484 kB' 'KReclaimable: 62048 kB' 'Slab: 155576 kB' 'SReclaimable: 62048 kB' 'SUnreclaim: 93528 kB' 'KernelStack: 6304 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.050 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.050 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
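Stripped of the per-key tracing, the check hugepages.sh is performing around this point reduces to a few integer comparisons. The values below are copied from this log and the names mirror the trace, but the snippet is an illustrative sketch rather than the script itself:

  nr_hugepages=1024   # requested hugepage count
  anon=0              # AnonHugePages
  surp=0              # HugePages_Surp
  resv=0              # HugePages_Rsvd
  total=1024          # HugePages_Total
  (( total == nr_hugepages + surp + resv )) && echo "hugepage totals consistent"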
00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.051 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.051 06:34:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 
-- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.052 06:34:20 -- setup/common.sh@33 -- # echo 1024 00:04:07.052 06:34:20 -- setup/common.sh@33 -- # return 0 00:04:07.052 06:34:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.052 06:34:20 -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.052 06:34:20 -- setup/hugepages.sh@27 -- # local node 00:04:07.052 06:34:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.052 06:34:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.052 06:34:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:07.052 06:34:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.052 06:34:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.052 06:34:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.052 06:34:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.052 06:34:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.052 06:34:20 -- setup/common.sh@18 -- # local node=0 00:04:07.052 06:34:20 -- setup/common.sh@19 -- # local var val 00:04:07.052 06:34:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.052 06:34:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.052 06:34:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.052 06:34:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.052 06:34:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.052 06:34:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8097876 kB' 'MemUsed: 4141236 kB' 'SwapCached: 0 kB' 'Active: 453332 kB' 'Inactive: 1260060 kB' 'Active(anon): 125484 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 
1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'FilePages: 1598392 kB' 'Mapped: 49780 kB' 'AnonPages: 116700 kB' 'Shmem: 10484 kB' 'KernelStack: 6320 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62048 kB' 'Slab: 155576 kB' 'SReclaimable: 62048 kB' 'SUnreclaim: 93528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 
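For the per-node pass, the same parser is pointed at the node-local meminfo file and the leading "Node 0 " prefix is stripped before key matching, which is what the mem_f=/sys/devices/system/node/node0/meminfo assignment above shows. A hedged, self-contained sketch of that path (the read loop is illustrative, not the SPDK code):

  node=0
  mem_f=/sys/devices/system/node/node${node}/meminfo
  while IFS=': ' read -r _ _ var val _; do             # lines read "Node 0 HugePages_Surp: 0"
      [[ $var == HugePages_Surp ]] && { echo "$val"; break; }
  done < "$mem_f"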
00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.052 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.052 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # continue 00:04:07.053 06:34:20 -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.053 06:34:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.053 06:34:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.053 06:34:20 -- setup/common.sh@33 -- # echo 0 00:04:07.053 06:34:20 -- setup/common.sh@33 -- # return 0 00:04:07.053 06:34:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.053 06:34:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.053 06:34:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.053 06:34:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.053 node0=1024 expecting 1024 00:04:07.053 06:34:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.053 06:34:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.053 06:34:20 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:07.053 06:34:20 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:07.053 06:34:20 -- setup/hugepages.sh@202 -- # setup output 00:04:07.053 06:34:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.053 06:34:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.312 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.575 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.575 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.575 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:07.575 06:34:21 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:07.575 06:34:21 -- setup/hugepages.sh@89 -- # local node 00:04:07.575 06:34:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.575 06:34:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.575 06:34:21 -- setup/hugepages.sh@92 -- # local surp 00:04:07.575 06:34:21 -- setup/hugepages.sh@93 -- # local resv 00:04:07.575 06:34:21 -- setup/hugepages.sh@94 -- # local anon 00:04:07.575 06:34:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.575 06:34:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.575 06:34:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.575 06:34:21 -- setup/common.sh@18 -- # local node= 00:04:07.575 06:34:21 -- setup/common.sh@19 -- # local var val 00:04:07.575 06:34:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.575 06:34:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.575 06:34:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.575 06:34:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.575 06:34:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.575 06:34:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.575 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.575 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.575 06:34:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8095864 kB' 'MemAvailable: 9477644 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 454132 kB' 'Inactive: 1260060 kB' 'Active(anon): 126284 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117428 kB' 'Mapped: 50016 kB' 'Shmem: 10484 kB' 'KReclaimable: 62048 kB' 'Slab: 155564 kB' 'SReclaimable: 62048 
kB' 'SUnreclaim: 93516 kB' 'KernelStack: 6312 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:07.575 06:34:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.575 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.575 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.575 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.575 06:34:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.575 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.575 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.575 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.575 06:34:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.575 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.575 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Active(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 
06:34:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 
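The scan in progress here is the first step of verify_nr_hugepages: because transparent hugepages are not set to [never] (the hugepages.sh@96 check above, against "always [madvise] never"), the script records AnonHugePages, which resolves to 0 just below. In sketch form, assuming only the condition visible in the trace:

    # Sketch of the anon-hugepage step (shape assumed from the trace).
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    else
        anon=0
    fi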
00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.576 06:34:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.576 06:34:21 -- setup/common.sh@33 -- # echo 0 00:04:07.576 06:34:21 -- setup/common.sh@33 -- # return 0 00:04:07.576 06:34:21 -- setup/hugepages.sh@97 -- # anon=0 00:04:07.576 06:34:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.576 06:34:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.576 06:34:21 -- setup/common.sh@18 -- # local node= 00:04:07.576 06:34:21 -- setup/common.sh@19 -- # local var val 00:04:07.576 06:34:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.576 06:34:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.576 06:34:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.576 06:34:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.576 06:34:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.576 06:34:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.576 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8095864 kB' 'MemAvailable: 9477644 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 453828 kB' 'Inactive: 1260060 kB' 'Active(anon): 125980 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 117032 kB' 'Mapped: 49836 kB' 'Shmem: 10484 kB' 'KReclaimable: 62048 kB' 'Slab: 155560 kB' 'SReclaimable: 62048 kB' 'SUnreclaim: 93512 kB' 'KernelStack: 6288 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- 
setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.577 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.577 06:34:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.578 06:34:21 -- setup/common.sh@33 -- # echo 0 00:04:07.578 06:34:21 -- setup/common.sh@33 -- # return 0 00:04:07.578 06:34:21 -- setup/hugepages.sh@99 -- # surp=0 00:04:07.578 06:34:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.578 06:34:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.578 06:34:21 -- setup/common.sh@18 -- # local node= 00:04:07.578 06:34:21 -- setup/common.sh@19 -- # local var val 00:04:07.578 06:34:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.578 06:34:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.578 06:34:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.578 06:34:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.578 06:34:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.578 06:34:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8095864 kB' 'MemAvailable: 9477644 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 453660 kB' 'Inactive: 1260060 kB' 'Active(anon): 125812 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 116904 kB' 'Mapped: 49780 kB' 'Shmem: 10484 kB' 'KReclaimable: 62048 kB' 'Slab: 155560 kB' 'SReclaimable: 62048 kB' 'SUnreclaim: 93512 kB' 'KernelStack: 6320 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.578 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.578 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 
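The pass in progress here reads HugePages_Rsvd; together with the HugePages_Surp value obtained above it feeds the global accounting check that follows (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0). In sketch form, taking nr_hugepages to be the requested count echoed by the script (an assumption, since the trace does not show where it is read):

    # Sketch of the accounting check (names follow the trace; values from this run).
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    nr_hugepages=1024                     # requested count for this run
    total=$(get_meminfo HugePages_Total)  # 1024
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"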
00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.579 06:34:21 -- setup/common.sh@33 -- # echo 0 00:04:07.579 06:34:21 -- setup/common.sh@33 -- # return 0 00:04:07.579 06:34:21 -- setup/hugepages.sh@100 -- # resv=0 00:04:07.579 nr_hugepages=1024 00:04:07.579 06:34:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.579 resv_hugepages=0 00:04:07.579 06:34:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.579 surplus_hugepages=0 00:04:07.579 06:34:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.579 anon_hugepages=0 00:04:07.579 06:34:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.579 06:34:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.579 06:34:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.579 06:34:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.579 06:34:21 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:07.579 06:34:21 -- setup/common.sh@18 -- # local node= 00:04:07.579 06:34:21 -- setup/common.sh@19 -- # local var val 00:04:07.579 06:34:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.579 06:34:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.579 06:34:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.579 06:34:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.579 06:34:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.579 06:34:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8095864 kB' 'MemAvailable: 9477644 kB' 'Buffers: 3704 kB' 'Cached: 1594688 kB' 'SwapCached: 0 kB' 'Active: 453676 kB' 'Inactive: 1260060 kB' 'Active(anon): 125828 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 116924 kB' 'Mapped: 49780 kB' 'Shmem: 10484 kB' 'KReclaimable: 62048 kB' 'Slab: 155560 kB' 'SReclaimable: 62048 kB' 'SUnreclaim: 93512 kB' 'KernelStack: 6288 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.579 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.579 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- 
setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 
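Once HugePages_Total comes back as 1024 a few entries below, the harness moves on to per-node accounting: get_nodes enumerates /sys/devices/system/node/node*, and each node's surplus is re-read from that node's own meminfo before the expected count is echoed ("node0=1024 expecting 1024" earlier in this log). A sketch of that per-node pass, with variable names and the per-node reads assumed from the trace rather than taken from hugepages.sh:

    # Sketch of the per-node pass (reconstructed; not the verbatim hugepages.sh logic).
    declare -a nodes_sys nodes_test
    nodes_test[0]=1024            # expected per-node count in this run
    resv=0
    for node_path in /sys/devices/system/node/node[0-9]*; do
        node=${node_path##*node}
        nodes_sys[node]=$(get_meminfo HugePages_Total "$node")
    done
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done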
00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 
06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.580 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.580 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.581 06:34:21 -- setup/common.sh@33 -- # echo 1024 00:04:07.581 06:34:21 -- setup/common.sh@33 -- # return 0 00:04:07.581 06:34:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.581 06:34:21 -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.581 06:34:21 -- setup/hugepages.sh@27 -- # local node 00:04:07.581 06:34:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.581 06:34:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.581 06:34:21 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:07.581 06:34:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.581 06:34:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.581 06:34:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.581 06:34:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.581 06:34:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.581 06:34:21 -- setup/common.sh@18 -- # local node=0 00:04:07.581 06:34:21 -- setup/common.sh@19 -- # local var val 00:04:07.581 06:34:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.581 06:34:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.581 06:34:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.581 06:34:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.581 06:34:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.581 06:34:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 8095864 kB' 'MemUsed: 4143248 kB' 'SwapCached: 0 kB' 'Active: 453604 kB' 'Inactive: 1260060 kB' 'Active(anon): 125756 kB' 'Inactive(anon): 0 kB' 'Active(file): 327848 kB' 'Inactive(file): 1260060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 148 kB' 
'Writeback: 0 kB' 'FilePages: 1598392 kB' 'Mapped: 49780 kB' 'AnonPages: 116848 kB' 'Shmem: 10484 kB' 'KernelStack: 6272 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62048 kB' 'Slab: 155560 kB' 'SReclaimable: 62048 kB' 'SUnreclaim: 93512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 
06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- 
# continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.581 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.581 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.582 06:34:21 -- setup/common.sh@32 -- # continue 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.582 06:34:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.582 06:34:21 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.582 06:34:21 -- setup/common.sh@33 -- # echo 0 00:04:07.582 06:34:21 -- setup/common.sh@33 -- # return 0 00:04:07.582 06:34:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.582 06:34:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.582 06:34:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.582 06:34:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.582 node0=1024 expecting 1024 00:04:07.582 06:34:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.582 06:34:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.582 00:04:07.582 real 0m1.084s 00:04:07.582 user 0m0.546s 00:04:07.582 sys 0m0.599s 00:04:07.582 06:34:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.582 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:04:07.582 ************************************ 00:04:07.582 END TEST no_shrink_alloc 00:04:07.582 ************************************ 00:04:07.582 06:34:21 -- setup/hugepages.sh@217 -- # clear_hp 00:04:07.582 06:34:21 -- setup/hugepages.sh@37 -- # local node hp 00:04:07.582 06:34:21 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.582 06:34:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.582 06:34:21 -- setup/hugepages.sh@41 -- # echo 0 00:04:07.582 06:34:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.582 06:34:21 -- setup/hugepages.sh@41 -- # echo 0 00:04:07.582 06:34:21 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:07.582 06:34:21 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:07.582 00:04:07.582 real 0m5.170s 00:04:07.582 user 0m2.458s 00:04:07.582 sys 0m2.610s 00:04:07.582 06:34:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.582 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:04:07.582 ************************************ 00:04:07.582 END TEST hugepages 00:04:07.582 ************************************ 00:04:07.842 06:34:21 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:07.842 06:34:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.842 06:34:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.842 06:34:21 -- common/autotest_common.sh@10 -- # set +x 00:04:07.842 ************************************ 00:04:07.842 START TEST driver 00:04:07.842 ************************************ 00:04:07.842 06:34:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:07.842 * Looking for test storage... 
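For context on the hugepages checks that finished just above: get_meminfo reads either /proc/meminfo or the per-node file under /sys and strips the "Node N " prefix before matching the requested field. A minimal sketch of that pattern in the same shell style (the helper name and defaults here are illustrative, not the repo's exact setup/common.sh code):

    # Sketch: fetch one meminfo field, optionally for a specific NUMA node.
    get_meminfo_sketch() {
        local key=$1 node=$2 file=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            file=/sys/devices/system/node/node$node/meminfo
        # Per-node lines look like "Node 0 HugePages_Total:  1024"; drop the prefix.
        sed "s/^Node $node //" "$file" | awk -v k="$key:" '$1 == k { print $2 }'
    }
    get_meminfo_sketch HugePages_Total 0   # e.g. 1024, matching the trace above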
00:04:07.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:07.842 06:34:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:07.842 06:34:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:07.842 06:34:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:07.842 06:34:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:07.842 06:34:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:07.842 06:34:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:07.842 06:34:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:07.842 06:34:21 -- scripts/common.sh@335 -- # IFS=.-: 00:04:07.842 06:34:21 -- scripts/common.sh@335 -- # read -ra ver1 00:04:07.842 06:34:21 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.842 06:34:21 -- scripts/common.sh@336 -- # read -ra ver2 00:04:07.842 06:34:21 -- scripts/common.sh@337 -- # local 'op=<' 00:04:07.842 06:34:21 -- scripts/common.sh@339 -- # ver1_l=2 00:04:07.842 06:34:21 -- scripts/common.sh@340 -- # ver2_l=1 00:04:07.842 06:34:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:07.842 06:34:21 -- scripts/common.sh@343 -- # case "$op" in 00:04:07.842 06:34:21 -- scripts/common.sh@344 -- # : 1 00:04:07.842 06:34:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:07.842 06:34:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.842 06:34:21 -- scripts/common.sh@364 -- # decimal 1 00:04:07.842 06:34:21 -- scripts/common.sh@352 -- # local d=1 00:04:07.842 06:34:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.842 06:34:21 -- scripts/common.sh@354 -- # echo 1 00:04:07.842 06:34:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:07.842 06:34:21 -- scripts/common.sh@365 -- # decimal 2 00:04:07.842 06:34:21 -- scripts/common.sh@352 -- # local d=2 00:04:07.842 06:34:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.842 06:34:21 -- scripts/common.sh@354 -- # echo 2 00:04:07.842 06:34:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:07.842 06:34:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:07.842 06:34:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:07.842 06:34:21 -- scripts/common.sh@367 -- # return 0 00:04:07.842 06:34:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.842 06:34:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:07.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.842 --rc genhtml_branch_coverage=1 00:04:07.842 --rc genhtml_function_coverage=1 00:04:07.842 --rc genhtml_legend=1 00:04:07.842 --rc geninfo_all_blocks=1 00:04:07.842 --rc geninfo_unexecuted_blocks=1 00:04:07.842 00:04:07.842 ' 00:04:07.842 06:34:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:07.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.842 --rc genhtml_branch_coverage=1 00:04:07.842 --rc genhtml_function_coverage=1 00:04:07.842 --rc genhtml_legend=1 00:04:07.842 --rc geninfo_all_blocks=1 00:04:07.842 --rc geninfo_unexecuted_blocks=1 00:04:07.842 00:04:07.842 ' 00:04:07.842 06:34:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:07.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.842 --rc genhtml_branch_coverage=1 00:04:07.842 --rc genhtml_function_coverage=1 00:04:07.842 --rc genhtml_legend=1 00:04:07.842 --rc geninfo_all_blocks=1 00:04:07.842 --rc geninfo_unexecuted_blocks=1 00:04:07.842 00:04:07.842 ' 00:04:07.842 06:34:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:07.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.842 --rc genhtml_branch_coverage=1 00:04:07.842 --rc genhtml_function_coverage=1 00:04:07.842 --rc genhtml_legend=1 00:04:07.842 --rc geninfo_all_blocks=1 00:04:07.842 --rc geninfo_unexecuted_blocks=1 00:04:07.842 00:04:07.842 ' 00:04:07.842 06:34:21 -- setup/driver.sh@68 -- # setup reset 00:04:07.842 06:34:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.842 06:34:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.410 06:34:22 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:08.410 06:34:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.410 06:34:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.410 06:34:22 -- common/autotest_common.sh@10 -- # set +x 00:04:08.410 ************************************ 00:04:08.410 START TEST guess_driver 00:04:08.410 ************************************ 00:04:08.410 06:34:22 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:08.410 06:34:22 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:08.410 06:34:22 -- setup/driver.sh@47 -- # local fail=0 00:04:08.410 06:34:22 -- setup/driver.sh@49 -- # pick_driver 00:04:08.410 06:34:22 -- setup/driver.sh@36 -- # vfio 00:04:08.410 06:34:22 -- setup/driver.sh@21 -- # local iommu_grups 00:04:08.410 06:34:22 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:08.410 06:34:22 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:08.410 06:34:22 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:08.410 06:34:22 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:08.410 06:34:22 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:08.411 06:34:22 -- setup/driver.sh@32 -- # return 1 00:04:08.411 06:34:22 -- setup/driver.sh@38 -- # uio 00:04:08.411 06:34:22 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:08.411 06:34:22 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:08.411 06:34:22 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:08.411 06:34:22 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:08.411 06:34:22 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:08.411 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:08.411 06:34:22 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:08.411 06:34:22 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:08.411 06:34:22 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:08.411 Looking for driver=uio_pci_generic 00:04:08.411 06:34:22 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:08.411 06:34:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.411 06:34:22 -- setup/driver.sh@45 -- # setup output config 00:04:08.411 06:34:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.411 06:34:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:09.349 06:34:22 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:09.349 06:34:22 -- setup/driver.sh@58 -- # continue 00:04:09.349 06:34:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.349 06:34:23 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.349 06:34:23 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:04:09.349 06:34:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.349 06:34:23 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:09.349 06:34:23 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:09.349 06:34:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:09.349 06:34:23 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:09.349 06:34:23 -- setup/driver.sh@65 -- # setup reset 00:04:09.349 06:34:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.349 06:34:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.917 00:04:09.917 real 0m1.437s 00:04:09.917 user 0m0.545s 00:04:09.917 sys 0m0.902s 00:04:09.917 06:34:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.917 06:34:23 -- common/autotest_common.sh@10 -- # set +x 00:04:09.917 ************************************ 00:04:09.917 END TEST guess_driver 00:04:09.917 ************************************ 00:04:09.917 00:04:09.917 real 0m2.221s 00:04:09.917 user 0m0.880s 00:04:09.917 sys 0m1.421s 00:04:09.917 06:34:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.917 06:34:23 -- common/autotest_common.sh@10 -- # set +x 00:04:09.917 ************************************ 00:04:09.917 END TEST driver 00:04:09.917 ************************************ 00:04:09.917 06:34:23 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:09.917 06:34:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.917 06:34:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.917 06:34:23 -- common/autotest_common.sh@10 -- # set +x 00:04:09.917 ************************************ 00:04:09.917 START TEST devices 00:04:09.917 ************************************ 00:04:09.917 06:34:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:10.176 * Looking for test storage... 00:04:10.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:10.176 06:34:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:10.176 06:34:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:10.176 06:34:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:10.176 06:34:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:10.176 06:34:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:10.176 06:34:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:10.176 06:34:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:10.176 06:34:24 -- scripts/common.sh@335 -- # IFS=.-: 00:04:10.176 06:34:24 -- scripts/common.sh@335 -- # read -ra ver1 00:04:10.176 06:34:24 -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.176 06:34:24 -- scripts/common.sh@336 -- # read -ra ver2 00:04:10.176 06:34:24 -- scripts/common.sh@337 -- # local 'op=<' 00:04:10.176 06:34:24 -- scripts/common.sh@339 -- # ver1_l=2 00:04:10.176 06:34:24 -- scripts/common.sh@340 -- # ver2_l=1 00:04:10.176 06:34:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:10.176 06:34:24 -- scripts/common.sh@343 -- # case "$op" in 00:04:10.176 06:34:24 -- scripts/common.sh@344 -- # : 1 00:04:10.176 06:34:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:10.176 06:34:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.176 06:34:24 -- scripts/common.sh@364 -- # decimal 1 00:04:10.176 06:34:24 -- scripts/common.sh@352 -- # local d=1 00:04:10.176 06:34:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.176 06:34:24 -- scripts/common.sh@354 -- # echo 1 00:04:10.176 06:34:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:10.177 06:34:24 -- scripts/common.sh@365 -- # decimal 2 00:04:10.177 06:34:24 -- scripts/common.sh@352 -- # local d=2 00:04:10.177 06:34:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.177 06:34:24 -- scripts/common.sh@354 -- # echo 2 00:04:10.177 06:34:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:10.177 06:34:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:10.177 06:34:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:10.177 06:34:24 -- scripts/common.sh@367 -- # return 0 00:04:10.177 06:34:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.177 06:34:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:10.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.177 --rc genhtml_branch_coverage=1 00:04:10.177 --rc genhtml_function_coverage=1 00:04:10.177 --rc genhtml_legend=1 00:04:10.177 --rc geninfo_all_blocks=1 00:04:10.177 --rc geninfo_unexecuted_blocks=1 00:04:10.177 00:04:10.177 ' 00:04:10.177 06:34:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:10.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.177 --rc genhtml_branch_coverage=1 00:04:10.177 --rc genhtml_function_coverage=1 00:04:10.177 --rc genhtml_legend=1 00:04:10.177 --rc geninfo_all_blocks=1 00:04:10.177 --rc geninfo_unexecuted_blocks=1 00:04:10.177 00:04:10.177 ' 00:04:10.177 06:34:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:10.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.177 --rc genhtml_branch_coverage=1 00:04:10.177 --rc genhtml_function_coverage=1 00:04:10.177 --rc genhtml_legend=1 00:04:10.177 --rc geninfo_all_blocks=1 00:04:10.177 --rc geninfo_unexecuted_blocks=1 00:04:10.177 00:04:10.177 ' 00:04:10.177 06:34:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:10.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.177 --rc genhtml_branch_coverage=1 00:04:10.177 --rc genhtml_function_coverage=1 00:04:10.177 --rc genhtml_legend=1 00:04:10.177 --rc geninfo_all_blocks=1 00:04:10.177 --rc geninfo_unexecuted_blocks=1 00:04:10.177 00:04:10.177 ' 00:04:10.177 06:34:24 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:10.177 06:34:24 -- setup/devices.sh@192 -- # setup reset 00:04:10.177 06:34:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.177 06:34:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.114 06:34:24 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:11.114 06:34:24 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:11.114 06:34:24 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:11.114 06:34:24 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:11.114 06:34:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:11.114 06:34:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:11.114 06:34:24 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:11.114 06:34:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:11.114 06:34:24 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:04:11.114 06:34:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:11.114 06:34:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:11.114 06:34:24 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:11.114 06:34:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:11.114 06:34:24 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:11.114 06:34:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:11.114 06:34:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:11.114 06:34:24 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:11.114 06:34:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:11.114 06:34:24 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:11.114 06:34:24 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:11.114 06:34:24 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:11.114 06:34:24 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:11.114 06:34:24 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:11.114 06:34:24 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:11.114 06:34:24 -- setup/devices.sh@196 -- # blocks=() 00:04:11.114 06:34:24 -- setup/devices.sh@196 -- # declare -a blocks 00:04:11.114 06:34:24 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:11.114 06:34:24 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:11.114 06:34:24 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:11.114 06:34:24 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:11.114 06:34:24 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:11.115 06:34:24 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:11.115 06:34:24 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:11.115 06:34:24 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:11.115 06:34:24 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:11.115 06:34:24 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:11.115 06:34:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:11.115 No valid GPT data, bailing 00:04:11.115 06:34:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:11.115 06:34:24 -- scripts/common.sh@393 -- # pt= 00:04:11.115 06:34:24 -- scripts/common.sh@394 -- # return 1 00:04:11.115 06:34:24 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:11.115 06:34:24 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:11.115 06:34:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:11.115 06:34:24 -- setup/common.sh@80 -- # echo 5368709120 00:04:11.115 06:34:24 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:11.115 06:34:24 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:11.115 06:34:24 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:11.115 06:34:24 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:11.115 06:34:24 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:11.115 06:34:24 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:11.115 06:34:24 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:11.115 06:34:24 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:11.115 06:34:24 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
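The device scan traced here skips zoned namespaces and anything that already carries a partition table before admitting a disk to the test pool. A rough stand-alone equivalent of those two checks (the loop and messages are illustrative; the real filtering is spread across scripts/common.sh and scripts/spdk-gpt.py):

    # Sketch: keep only conventional (non-zoned), unpartitioned NVMe namespaces.
    for sys in /sys/block/nvme*n*; do
        dev=${sys##*/}
        # "none" in queue/zoned marks a conventional block device.
        [[ $(cat "$sys/queue/zoned" 2>/dev/null) != none ]] && continue
        # blkid prints the partition-table type (gpt, dos, ...) or nothing at all.
        pt=$(blkid -s PTTYPE -o value "/dev/$dev")
        [[ -n $pt ]] && { echo "$dev already has a $pt label, skipping"; continue; }
        echo "$dev looks usable"
    done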
00:04:11.115 06:34:24 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:11.115 06:34:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:11.115 No valid GPT data, bailing 00:04:11.115 06:34:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:11.115 06:34:24 -- scripts/common.sh@393 -- # pt= 00:04:11.115 06:34:24 -- scripts/common.sh@394 -- # return 1 00:04:11.115 06:34:24 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:11.115 06:34:24 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:11.115 06:34:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:11.115 06:34:24 -- setup/common.sh@80 -- # echo 4294967296 00:04:11.115 06:34:24 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:11.115 06:34:24 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:11.115 06:34:24 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:11.115 06:34:24 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:11.115 06:34:24 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:11.115 06:34:24 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:11.115 06:34:24 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:11.115 06:34:24 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:11.115 06:34:24 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:11.115 06:34:24 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:11.115 06:34:24 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:11.115 No valid GPT data, bailing 00:04:11.115 06:34:25 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:11.115 06:34:25 -- scripts/common.sh@393 -- # pt= 00:04:11.115 06:34:25 -- scripts/common.sh@394 -- # return 1 00:04:11.115 06:34:25 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:11.115 06:34:25 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:11.115 06:34:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:11.115 06:34:25 -- setup/common.sh@80 -- # echo 4294967296 00:04:11.115 06:34:25 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:11.115 06:34:25 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:11.115 06:34:25 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:11.115 06:34:25 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:11.115 06:34:25 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:11.115 06:34:25 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:11.115 06:34:25 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:11.115 06:34:25 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:11.115 06:34:25 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:11.115 06:34:25 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:11.115 06:34:25 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:11.115 No valid GPT data, bailing 00:04:11.374 06:34:25 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:11.374 06:34:25 -- scripts/common.sh@393 -- # pt= 00:04:11.374 06:34:25 -- scripts/common.sh@394 -- # return 1 00:04:11.374 06:34:25 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:11.374 06:34:25 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:11.374 06:34:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:11.374 06:34:25 -- setup/common.sh@80 -- # echo 4294967296 
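Each surviving namespace is also compared against min_disk_size=3221225472 (3 GiB); the trace shows sec_size_to_bytes answering 5368709120 for nvme0n1 and 4294967296 for the nvme1 namespaces. One plausible way to derive those byte counts (an assumption about the helper, not its actual body) is from the sector count in sysfs:

    # Sketch: capacity in bytes from the 512-byte sector count sysfs exposes.
    min_disk_size=3221225472           # 3 GiB, as declared in the trace
    dev=nvme1n1                        # one of the namespaces from this run
    bytes=$(( $(cat /sys/block/$dev/size) * 512 ))
    (( bytes >= min_disk_size )) && echo "$dev qualifies with $bytes bytes"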
00:04:11.374 06:34:25 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:11.374 06:34:25 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:11.374 06:34:25 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:11.374 06:34:25 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:11.374 06:34:25 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:11.374 06:34:25 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:11.374 06:34:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.374 06:34:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.374 06:34:25 -- common/autotest_common.sh@10 -- # set +x 00:04:11.374 ************************************ 00:04:11.374 START TEST nvme_mount 00:04:11.374 ************************************ 00:04:11.374 06:34:25 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:11.374 06:34:25 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:11.374 06:34:25 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:11.374 06:34:25 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:11.374 06:34:25 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:11.374 06:34:25 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:11.374 06:34:25 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:11.374 06:34:25 -- setup/common.sh@40 -- # local part_no=1 00:04:11.374 06:34:25 -- setup/common.sh@41 -- # local size=1073741824 00:04:11.374 06:34:25 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:11.374 06:34:25 -- setup/common.sh@44 -- # parts=() 00:04:11.374 06:34:25 -- setup/common.sh@44 -- # local parts 00:04:11.374 06:34:25 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:11.374 06:34:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:11.374 06:34:25 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:11.374 06:34:25 -- setup/common.sh@46 -- # (( part++ )) 00:04:11.374 06:34:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:11.374 06:34:25 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:11.374 06:34:25 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:11.374 06:34:25 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:12.311 Creating new GPT entries in memory. 00:04:12.311 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:12.311 other utilities. 00:04:12.311 06:34:26 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:12.311 06:34:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.311 06:34:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:12.311 06:34:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:12.311 06:34:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:13.248 Creating new GPT entries in memory. 00:04:13.248 The operation has completed successfully. 
00:04:13.248 06:34:27 -- setup/common.sh@57 -- # (( part++ )) 00:04:13.248 06:34:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:13.248 06:34:27 -- setup/common.sh@62 -- # wait 52097 00:04:13.248 06:34:27 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.248 06:34:27 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:13.248 06:34:27 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.248 06:34:27 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:13.248 06:34:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:13.507 06:34:27 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.507 06:34:27 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:13.507 06:34:27 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:13.507 06:34:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:13.507 06:34:27 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.507 06:34:27 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:13.507 06:34:27 -- setup/devices.sh@53 -- # local found=0 00:04:13.507 06:34:27 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.507 06:34:27 -- setup/devices.sh@56 -- # : 00:04:13.507 06:34:27 -- setup/devices.sh@59 -- # local pci status 00:04:13.507 06:34:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.507 06:34:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:13.507 06:34:27 -- setup/devices.sh@47 -- # setup output config 00:04:13.507 06:34:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.507 06:34:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.507 06:34:27 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:13.507 06:34:27 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:13.507 06:34:27 -- setup/devices.sh@63 -- # found=1 00:04:13.507 06:34:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.507 06:34:27 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:13.507 06:34:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.075 06:34:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:14.075 06:34:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.075 06:34:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:14.075 06:34:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.075 06:34:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.075 06:34:27 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:14.075 06:34:27 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:14.075 06:34:27 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:14.075 06:34:27 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:14.075 06:34:27 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:14.075 06:34:27 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:14.075 06:34:27 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:14.075 06:34:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:14.075 06:34:27 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:14.075 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:14.075 06:34:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:14.075 06:34:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:14.334 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:14.334 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:14.334 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:14.334 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:14.334 06:34:28 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:14.334 06:34:28 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:14.334 06:34:28 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:14.334 06:34:28 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:14.334 06:34:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:14.334 06:34:28 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:14.334 06:34:28 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:14.334 06:34:28 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:14.334 06:34:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:14.334 06:34:28 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:14.334 06:34:28 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:14.334 06:34:28 -- setup/devices.sh@53 -- # local found=0 00:04:14.334 06:34:28 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:14.334 06:34:28 -- setup/devices.sh@56 -- # : 00:04:14.334 06:34:28 -- setup/devices.sh@59 -- # local pci status 00:04:14.334 06:34:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.334 06:34:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:14.334 06:34:28 -- setup/devices.sh@47 -- # setup output config 00:04:14.334 06:34:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.334 06:34:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.607 06:34:28 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:14.607 06:34:28 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:14.607 06:34:28 -- setup/devices.sh@63 -- # found=1 00:04:14.607 06:34:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.607 06:34:28 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:14.607 
06:34:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.878 06:34:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:14.878 06:34:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.878 06:34:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:14.878 06:34:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.137 06:34:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.137 06:34:28 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:15.137 06:34:28 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:15.137 06:34:28 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:15.137 06:34:28 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:15.137 06:34:28 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:15.137 06:34:28 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:15.137 06:34:28 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:15.137 06:34:28 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:15.137 06:34:28 -- setup/devices.sh@50 -- # local mount_point= 00:04:15.137 06:34:28 -- setup/devices.sh@51 -- # local test_file= 00:04:15.137 06:34:28 -- setup/devices.sh@53 -- # local found=0 00:04:15.137 06:34:28 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:15.137 06:34:28 -- setup/devices.sh@59 -- # local pci status 00:04:15.137 06:34:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.137 06:34:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:15.137 06:34:28 -- setup/devices.sh@47 -- # setup output config 00:04:15.137 06:34:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.137 06:34:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:15.397 06:34:29 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:15.397 06:34:29 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:15.397 06:34:29 -- setup/devices.sh@63 -- # found=1 00:04:15.397 06:34:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.397 06:34:29 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:15.397 06:34:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.656 06:34:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:15.656 06:34:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.656 06:34:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:15.656 06:34:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.915 06:34:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.915 06:34:29 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:15.915 06:34:29 -- setup/devices.sh@68 -- # return 0 00:04:15.915 06:34:29 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:15.915 06:34:29 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:15.915 06:34:29 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.915 06:34:29 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.915 06:34:29 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:15.915 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:15.915 00:04:15.915 real 0m4.550s 00:04:15.915 user 0m1.046s 00:04:15.915 sys 0m1.204s 00:04:15.915 06:34:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:15.915 06:34:29 -- common/autotest_common.sh@10 -- # set +x 00:04:15.915 ************************************ 00:04:15.915 END TEST nvme_mount 00:04:15.915 ************************************ 00:04:15.915 06:34:29 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:15.915 06:34:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.915 06:34:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.915 06:34:29 -- common/autotest_common.sh@10 -- # set +x 00:04:15.915 ************************************ 00:04:15.915 START TEST dm_mount 00:04:15.915 ************************************ 00:04:15.915 06:34:29 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:15.915 06:34:29 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:15.915 06:34:29 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:15.915 06:34:29 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:15.915 06:34:29 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:15.915 06:34:29 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:15.915 06:34:29 -- setup/common.sh@40 -- # local part_no=2 00:04:15.915 06:34:29 -- setup/common.sh@41 -- # local size=1073741824 00:04:15.915 06:34:29 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:15.915 06:34:29 -- setup/common.sh@44 -- # parts=() 00:04:15.915 06:34:29 -- setup/common.sh@44 -- # local parts 00:04:15.915 06:34:29 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:15.915 06:34:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.915 06:34:29 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.915 06:34:29 -- setup/common.sh@46 -- # (( part++ )) 00:04:15.915 06:34:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.915 06:34:29 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.915 06:34:29 -- setup/common.sh@46 -- # (( part++ )) 00:04:15.915 06:34:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.915 06:34:29 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:15.915 06:34:29 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:15.915 06:34:29 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:16.851 Creating new GPT entries in memory. 00:04:16.851 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:16.851 other utilities. 00:04:16.851 06:34:30 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:16.851 06:34:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.851 06:34:30 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:16.851 06:34:30 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:16.851 06:34:30 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:18.229 Creating new GPT entries in memory. 00:04:18.229 The operation has completed successfully. 00:04:18.229 06:34:31 -- setup/common.sh@57 -- # (( part++ )) 00:04:18.229 06:34:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.229 06:34:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:18.229 06:34:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.229 06:34:31 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:19.168 The operation has completed successfully. 00:04:19.168 06:34:32 -- setup/common.sh@57 -- # (( part++ )) 00:04:19.168 06:34:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.168 06:34:32 -- setup/common.sh@62 -- # wait 52556 00:04:19.168 06:34:32 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:19.168 06:34:32 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.168 06:34:32 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:19.168 06:34:32 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:19.168 06:34:32 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:19.168 06:34:32 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.168 06:34:32 -- setup/devices.sh@161 -- # break 00:04:19.168 06:34:32 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.168 06:34:32 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:19.168 06:34:32 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:19.168 06:34:32 -- setup/devices.sh@166 -- # dm=dm-0 00:04:19.168 06:34:32 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:19.168 06:34:32 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:19.168 06:34:32 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.168 06:34:32 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:19.168 06:34:32 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.168 06:34:32 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:19.168 06:34:32 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:19.168 06:34:32 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.168 06:34:32 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:19.168 06:34:32 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:19.168 06:34:32 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:19.168 06:34:32 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.168 06:34:32 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:19.168 06:34:32 -- setup/devices.sh@53 -- # local found=0 00:04:19.168 06:34:32 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.168 06:34:32 -- setup/devices.sh@56 -- # : 00:04:19.168 06:34:32 -- setup/devices.sh@59 -- # local pci status 00:04:19.168 06:34:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.168 06:34:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:19.168 06:34:32 -- setup/devices.sh@47 -- # setup output config 00:04:19.168 06:34:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.168 06:34:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.168 06:34:33 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.168 06:34:33 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:19.168 06:34:33 -- setup/devices.sh@63 -- # found=1 00:04:19.168 06:34:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.168 06:34:33 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.168 06:34:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.735 06:34:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.735 06:34:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.735 06:34:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.735 06:34:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.735 06:34:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.735 06:34:33 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:19.735 06:34:33 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.735 06:34:33 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.735 06:34:33 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:19.735 06:34:33 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.735 06:34:33 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:19.735 06:34:33 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:19.735 06:34:33 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:19.735 06:34:33 -- setup/devices.sh@50 -- # local mount_point= 00:04:19.735 06:34:33 -- setup/devices.sh@51 -- # local test_file= 00:04:19.735 06:34:33 -- setup/devices.sh@53 -- # local found=0 00:04:19.735 06:34:33 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.735 06:34:33 -- setup/devices.sh@59 -- # local pci status 00:04:19.735 06:34:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.735 06:34:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:19.735 06:34:33 -- setup/devices.sh@47 -- # setup output config 00:04:19.735 06:34:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.735 06:34:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.994 06:34:33 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.994 06:34:33 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:19.994 06:34:33 -- setup/devices.sh@63 -- # found=1 00:04:19.994 06:34:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.994 06:34:33 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.994 06:34:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.253 06:34:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:20.253 06:34:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.253 06:34:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:20.253 06:34:34 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.513 06:34:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.513 06:34:34 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:20.513 06:34:34 -- setup/devices.sh@68 -- # return 0 00:04:20.513 06:34:34 -- setup/devices.sh@187 -- # cleanup_dm 00:04:20.513 06:34:34 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:20.513 06:34:34 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:20.513 06:34:34 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:20.513 06:34:34 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.513 06:34:34 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:20.513 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:20.513 06:34:34 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:20.513 06:34:34 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:20.513 00:04:20.513 real 0m4.625s 00:04:20.513 user 0m0.720s 00:04:20.513 sys 0m0.824s 00:04:20.513 06:34:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.513 06:34:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.513 ************************************ 00:04:20.513 END TEST dm_mount 00:04:20.513 ************************************ 00:04:20.513 06:34:34 -- setup/devices.sh@1 -- # cleanup 00:04:20.513 06:34:34 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:20.513 06:34:34 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.513 06:34:34 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.513 06:34:34 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:20.513 06:34:34 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.513 06:34:34 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:20.772 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:20.772 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:20.772 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:20.772 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:20.772 06:34:34 -- setup/devices.sh@12 -- # cleanup_dm 00:04:20.772 06:34:34 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:20.772 06:34:34 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:20.772 06:34:34 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.772 06:34:34 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:20.772 06:34:34 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.773 06:34:34 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:20.773 00:04:20.773 real 0m10.837s 00:04:20.773 user 0m2.515s 00:04:20.773 sys 0m2.657s 00:04:20.773 ************************************ 00:04:20.773 END TEST devices 00:04:20.773 ************************************ 00:04:20.773 06:34:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.773 06:34:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.773 00:04:20.773 real 0m23.070s 00:04:20.773 user 0m7.978s 00:04:20.773 sys 0m9.367s 00:04:20.773 06:34:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.773 06:34:34 -- common/autotest_common.sh@10 -- # set +x 00:04:20.773 ************************************ 00:04:20.773 END TEST setup.sh 00:04:20.773 ************************************ 00:04:21.031 06:34:34 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:21.031 Hugepages 00:04:21.031 node hugesize free / total 00:04:21.031 node0 1048576kB 0 / 0 00:04:21.031 node0 2048kB 2048 / 2048 00:04:21.031 00:04:21.031 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:21.031 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:21.290 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:21.290 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:21.290 06:34:35 -- spdk/autotest.sh@128 -- # uname -s 00:04:21.290 06:34:35 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:21.290 06:34:35 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:21.290 06:34:35 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:21.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.115 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.115 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.115 06:34:36 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:23.509 06:34:37 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:23.509 06:34:37 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:23.509 06:34:37 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:23.509 06:34:37 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:23.509 06:34:37 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:23.509 06:34:37 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:23.509 06:34:37 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:23.509 06:34:37 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:23.509 06:34:37 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:23.509 06:34:37 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:23.509 06:34:37 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:23.509 06:34:37 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.509 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.768 Waiting for block devices as requested 00:04:23.768 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.768 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.769 06:34:37 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:23.769 06:34:37 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:23.769 06:34:37 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:23.769 06:34:37 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:23.769 06:34:37 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:23.769 06:34:37 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:23.769 06:34:37 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:23.769 06:34:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:23.769 06:34:37 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:23.769 06:34:37 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:23.769 06:34:37 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:23.769 06:34:37 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:23.769 06:34:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:23.769 06:34:37 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:23.769 06:34:37 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:23.769 06:34:37 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:23.769 06:34:37 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:23.769 06:34:37 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:23.769 06:34:37 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:23.769 06:34:37 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:23.769 06:34:37 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:23.769 06:34:37 -- common/autotest_common.sh@1552 -- # continue 00:04:23.769 06:34:37 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:23.769 06:34:37 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:23.769 06:34:37 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:23.769 06:34:37 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:23.769 06:34:37 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:23.769 06:34:37 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:04:24.028 06:34:37 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:04:24.028 06:34:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:24.028 06:34:37 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:24.028 06:34:37 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:24.028 06:34:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:24.028 06:34:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:24.028 06:34:37 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:24.028 06:34:37 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:24.028 06:34:37 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:24.028 06:34:37 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:24.028 06:34:37 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:24.028 06:34:37 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:24.028 06:34:37 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:24.028 06:34:37 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:24.028 06:34:37 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:24.028 06:34:37 -- common/autotest_common.sh@1552 -- # continue 00:04:24.028 06:34:37 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:24.028 06:34:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.028 06:34:37 -- common/autotest_common.sh@10 -- # set +x 00:04:24.028 06:34:37 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:24.028 06:34:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.028 06:34:37 -- common/autotest_common.sh@10 -- # set +x 00:04:24.028 06:34:37 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.596 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.854 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:04:24.854 06:34:38 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:24.854 06:34:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.854 06:34:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.854 06:34:38 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:24.855 06:34:38 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:24.855 06:34:38 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:24.855 06:34:38 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:24.855 06:34:38 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:24.855 06:34:38 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:24.855 06:34:38 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:24.855 06:34:38 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:24.855 06:34:38 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.855 06:34:38 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:24.855 06:34:38 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:24.855 06:34:38 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:04:24.855 06:34:38 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:04:24.855 06:34:38 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:24.855 06:34:38 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:24.855 06:34:38 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:24.855 06:34:38 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.855 06:34:38 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:24.855 06:34:38 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:24.855 06:34:38 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:24.855 06:34:38 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.855 06:34:38 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:24.855 06:34:38 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:24.855 06:34:38 -- common/autotest_common.sh@1588 -- # return 0 00:04:24.855 06:34:38 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:24.855 06:34:38 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:24.855 06:34:38 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:24.855 06:34:38 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:24.855 06:34:38 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:24.855 06:34:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.855 06:34:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.855 06:34:38 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:24.855 06:34:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.855 06:34:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.855 06:34:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.855 ************************************ 00:04:24.855 START TEST env 00:04:24.855 ************************************ 00:04:24.855 06:34:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:25.114 * Looking for test storage... 
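Note on the pre-cleanup pass traced above: before the functional tests start, autotest enumerates the NVMe controllers' PCI addresses, reads their OACS and unvmcap fields with nvme-cli, and opal_revert_cleanup then looks for controllers whose PCI device ID is 0x0a54. A hedged sketch of those checks in plain shell (the helper names, the jq filter and the 0x0a54 comparison are taken from the trace; the rest is illustrative, not the exact autotest code):

  # enumerate NVMe PCI addresses the way get_nvme_bdfs does above
  bdfs=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr')

  for bdf in $bdfs; do
      # opal_revert_cleanup only acts on 0x0a54 devices; the emulated 0x0010
      # controllers in this run are skipped, so the step is a no-op here
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf would get an Opal revert"
  done

  # the earlier 'Waiting for block devices' pass checks namespace-management support
  # (OACS bit 3) and whether unallocated capacity is already zero
  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)      # ' 0x12a' in this run
  (( oacs & 0x8 )) && echo "namespace management supported"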
00:04:25.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:25.114 06:34:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:25.114 06:34:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:25.114 06:34:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:25.114 06:34:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:25.114 06:34:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:25.114 06:34:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:25.114 06:34:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:25.114 06:34:38 -- scripts/common.sh@335 -- # IFS=.-: 00:04:25.114 06:34:38 -- scripts/common.sh@335 -- # read -ra ver1 00:04:25.114 06:34:38 -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.114 06:34:38 -- scripts/common.sh@336 -- # read -ra ver2 00:04:25.114 06:34:38 -- scripts/common.sh@337 -- # local 'op=<' 00:04:25.114 06:34:38 -- scripts/common.sh@339 -- # ver1_l=2 00:04:25.114 06:34:38 -- scripts/common.sh@340 -- # ver2_l=1 00:04:25.114 06:34:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:25.114 06:34:38 -- scripts/common.sh@343 -- # case "$op" in 00:04:25.114 06:34:38 -- scripts/common.sh@344 -- # : 1 00:04:25.114 06:34:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:25.114 06:34:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:25.114 06:34:38 -- scripts/common.sh@364 -- # decimal 1 00:04:25.114 06:34:38 -- scripts/common.sh@352 -- # local d=1 00:04:25.114 06:34:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.114 06:34:38 -- scripts/common.sh@354 -- # echo 1 00:04:25.114 06:34:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:25.114 06:34:38 -- scripts/common.sh@365 -- # decimal 2 00:04:25.114 06:34:38 -- scripts/common.sh@352 -- # local d=2 00:04:25.114 06:34:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.114 06:34:38 -- scripts/common.sh@354 -- # echo 2 00:04:25.114 06:34:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:25.114 06:34:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:25.114 06:34:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:25.114 06:34:38 -- scripts/common.sh@367 -- # return 0 00:04:25.114 06:34:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.114 06:34:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:25.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.114 --rc genhtml_branch_coverage=1 00:04:25.114 --rc genhtml_function_coverage=1 00:04:25.114 --rc genhtml_legend=1 00:04:25.114 --rc geninfo_all_blocks=1 00:04:25.114 --rc geninfo_unexecuted_blocks=1 00:04:25.114 00:04:25.114 ' 00:04:25.114 06:34:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:25.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.114 --rc genhtml_branch_coverage=1 00:04:25.114 --rc genhtml_function_coverage=1 00:04:25.114 --rc genhtml_legend=1 00:04:25.114 --rc geninfo_all_blocks=1 00:04:25.114 --rc geninfo_unexecuted_blocks=1 00:04:25.114 00:04:25.114 ' 00:04:25.114 06:34:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:25.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.114 --rc genhtml_branch_coverage=1 00:04:25.114 --rc genhtml_function_coverage=1 00:04:25.114 --rc genhtml_legend=1 00:04:25.114 --rc geninfo_all_blocks=1 00:04:25.114 --rc geninfo_unexecuted_blocks=1 00:04:25.114 00:04:25.114 ' 00:04:25.114 06:34:38 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:25.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.114 --rc genhtml_branch_coverage=1 00:04:25.114 --rc genhtml_function_coverage=1 00:04:25.114 --rc genhtml_legend=1 00:04:25.114 --rc geninfo_all_blocks=1 00:04:25.114 --rc geninfo_unexecuted_blocks=1 00:04:25.114 00:04:25.114 ' 00:04:25.114 06:34:38 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:25.114 06:34:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.114 06:34:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.114 06:34:38 -- common/autotest_common.sh@10 -- # set +x 00:04:25.114 ************************************ 00:04:25.114 START TEST env_memory 00:04:25.114 ************************************ 00:04:25.114 06:34:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:25.114 00:04:25.114 00:04:25.114 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.114 http://cunit.sourceforge.net/ 00:04:25.114 00:04:25.114 00:04:25.114 Suite: memory 00:04:25.114 Test: alloc and free memory map ...[2024-12-14 06:34:39.050925] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:25.114 passed 00:04:25.114 Test: mem map translation ...[2024-12-14 06:34:39.081520] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:25.114 [2024-12-14 06:34:39.081560] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:25.114 [2024-12-14 06:34:39.081615] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:25.114 [2024-12-14 06:34:39.081625] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:25.373 passed 00:04:25.373 Test: mem map registration ...[2024-12-14 06:34:39.145278] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:25.373 [2024-12-14 06:34:39.145311] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:25.373 passed 00:04:25.373 Test: mem map adjacent registrations ...passed 00:04:25.373 00:04:25.373 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.373 suites 1 1 n/a 0 0 00:04:25.373 tests 4 4 4 0 0 00:04:25.373 asserts 152 152 152 0 n/a 00:04:25.373 00:04:25.373 Elapsed time = 0.212 seconds 00:04:25.373 00:04:25.373 real 0m0.231s 00:04:25.373 user 0m0.214s 00:04:25.373 sys 0m0.012s 00:04:25.373 06:34:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:25.373 06:34:39 -- common/autotest_common.sh@10 -- # set +x 00:04:25.373 ************************************ 00:04:25.373 END TEST env_memory 00:04:25.373 ************************************ 00:04:25.373 06:34:39 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:25.373 06:34:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.373 06:34:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.373 06:34:39 -- 
common/autotest_common.sh@10 -- # set +x 00:04:25.373 ************************************ 00:04:25.373 START TEST env_vtophys 00:04:25.373 ************************************ 00:04:25.373 06:34:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:25.373 EAL: lib.eal log level changed from notice to debug 00:04:25.373 EAL: Detected lcore 0 as core 0 on socket 0 00:04:25.374 EAL: Detected lcore 1 as core 0 on socket 0 00:04:25.374 EAL: Detected lcore 2 as core 0 on socket 0 00:04:25.374 EAL: Detected lcore 3 as core 0 on socket 0 00:04:25.374 EAL: Detected lcore 4 as core 0 on socket 0 00:04:25.374 EAL: Detected lcore 5 as core 0 on socket 0 00:04:25.374 EAL: Detected lcore 6 as core 0 on socket 0 00:04:25.374 EAL: Detected lcore 7 as core 0 on socket 0 00:04:25.374 EAL: Detected lcore 8 as core 0 on socket 0 00:04:25.374 EAL: Detected lcore 9 as core 0 on socket 0 00:04:25.374 EAL: Maximum logical cores by configuration: 128 00:04:25.374 EAL: Detected CPU lcores: 10 00:04:25.374 EAL: Detected NUMA nodes: 1 00:04:25.374 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:25.374 EAL: Detected shared linkage of DPDK 00:04:25.374 EAL: No shared files mode enabled, IPC will be disabled 00:04:25.374 EAL: Selected IOVA mode 'PA' 00:04:25.374 EAL: Probing VFIO support... 00:04:25.374 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:25.374 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:25.374 EAL: Ask a virtual area of 0x2e000 bytes 00:04:25.374 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:25.374 EAL: Setting up physically contiguous memory... 00:04:25.374 EAL: Setting maximum number of open files to 524288 00:04:25.374 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:25.374 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:25.374 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.374 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:25.374 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.374 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.374 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:25.374 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:25.374 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.374 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:25.374 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.374 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.374 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:25.374 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:25.374 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.374 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:25.374 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.374 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.374 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:25.374 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:25.374 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.374 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:25.374 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.374 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.374 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:25.374 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
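A quick sanity check on the EAL reservations above: each of the four memseg lists gets a 0x61000-byte header plus a 0x400000000-byte virtual window, and 0x400000000 bytes is exactly 8192 segments of 2 MiB hugepages, matching the n_segs:8192 / hugepage_sz:2097152 line. Verifying the arithmetic with nothing but the figures EAL printed:

  printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # -> 0x400000000, the per-list VA reservation
  echo "$(( 4 * 8192 * 2 )) MiB"                  # -> 65536 MiB (64 GiB) of hugepage VA reserved in total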
00:04:25.374 EAL: Hugepages will be freed exactly as allocated. 00:04:25.374 EAL: No shared files mode enabled, IPC is disabled 00:04:25.374 EAL: No shared files mode enabled, IPC is disabled 00:04:25.633 EAL: TSC frequency is ~2200000 KHz 00:04:25.633 EAL: Main lcore 0 is ready (tid=7f8070839a00;cpuset=[0]) 00:04:25.633 EAL: Trying to obtain current memory policy. 00:04:25.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.633 EAL: Restoring previous memory policy: 0 00:04:25.633 EAL: request: mp_malloc_sync 00:04:25.633 EAL: No shared files mode enabled, IPC is disabled 00:04:25.633 EAL: Heap on socket 0 was expanded by 2MB 00:04:25.633 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:25.633 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:25.633 EAL: Mem event callback 'spdk:(nil)' registered 00:04:25.633 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:25.633 00:04:25.633 00:04:25.633 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.633 http://cunit.sourceforge.net/ 00:04:25.633 00:04:25.633 00:04:25.633 Suite: components_suite 00:04:25.633 Test: vtophys_malloc_test ...passed 00:04:25.633 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:25.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.633 EAL: Restoring previous memory policy: 4 00:04:25.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.633 EAL: request: mp_malloc_sync 00:04:25.633 EAL: No shared files mode enabled, IPC is disabled 00:04:25.633 EAL: Heap on socket 0 was expanded by 4MB 00:04:25.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.633 EAL: request: mp_malloc_sync 00:04:25.633 EAL: No shared files mode enabled, IPC is disabled 00:04:25.633 EAL: Heap on socket 0 was shrunk by 4MB 00:04:25.633 EAL: Trying to obtain current memory policy. 00:04:25.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.633 EAL: Restoring previous memory policy: 4 00:04:25.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.633 EAL: request: mp_malloc_sync 00:04:25.633 EAL: No shared files mode enabled, IPC is disabled 00:04:25.633 EAL: Heap on socket 0 was expanded by 6MB 00:04:25.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.633 EAL: request: mp_malloc_sync 00:04:25.633 EAL: No shared files mode enabled, IPC is disabled 00:04:25.633 EAL: Heap on socket 0 was shrunk by 6MB 00:04:25.633 EAL: Trying to obtain current memory policy. 00:04:25.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.633 EAL: Restoring previous memory policy: 4 00:04:25.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.633 EAL: request: mp_malloc_sync 00:04:25.633 EAL: No shared files mode enabled, IPC is disabled 00:04:25.633 EAL: Heap on socket 0 was expanded by 10MB 00:04:25.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.633 EAL: request: mp_malloc_sync 00:04:25.633 EAL: No shared files mode enabled, IPC is disabled 00:04:25.633 EAL: Heap on socket 0 was shrunk by 10MB 00:04:25.633 EAL: Trying to obtain current memory policy. 
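For context on where this heap comes from: the 'Heap on socket 0 was expanded/shrunk by ...' messages are EAL mapping 2 MiB hugepages in and out of the pool that setup.sh status reported earlier (node0 2048kB 2048 / 2048). The pool can be watched directly while the test runs; a small sketch using only standard sysfs paths, no SPDK helpers assumed:

  for pool in /sys/devices/system/node/node*/hugepages/hugepages-*kB; do
      echo "$pool: $(cat "$pool/free_hugepages") free of $(cat "$pool/nr_hugepages")"
  done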
00:04:25.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.633 EAL: Restoring previous memory policy: 4 00:04:25.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.633 EAL: request: mp_malloc_sync 00:04:25.633 EAL: No shared files mode enabled, IPC is disabled 00:04:25.633 EAL: Heap on socket 0 was expanded by 18MB 00:04:25.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.633 EAL: request: mp_malloc_sync 00:04:25.633 EAL: No shared files mode enabled, IPC is disabled 00:04:25.633 EAL: Heap on socket 0 was shrunk by 18MB 00:04:25.633 EAL: Trying to obtain current memory policy. 00:04:25.633 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.633 EAL: Restoring previous memory policy: 4 00:04:25.634 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.634 EAL: request: mp_malloc_sync 00:04:25.634 EAL: No shared files mode enabled, IPC is disabled 00:04:25.634 EAL: Heap on socket 0 was expanded by 34MB 00:04:25.634 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.634 EAL: request: mp_malloc_sync 00:04:25.634 EAL: No shared files mode enabled, IPC is disabled 00:04:25.634 EAL: Heap on socket 0 was shrunk by 34MB 00:04:25.634 EAL: Trying to obtain current memory policy. 00:04:25.634 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.634 EAL: Restoring previous memory policy: 4 00:04:25.634 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.634 EAL: request: mp_malloc_sync 00:04:25.634 EAL: No shared files mode enabled, IPC is disabled 00:04:25.634 EAL: Heap on socket 0 was expanded by 66MB 00:04:25.634 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.634 EAL: request: mp_malloc_sync 00:04:25.634 EAL: No shared files mode enabled, IPC is disabled 00:04:25.634 EAL: Heap on socket 0 was shrunk by 66MB 00:04:25.634 EAL: Trying to obtain current memory policy. 00:04:25.634 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.634 EAL: Restoring previous memory policy: 4 00:04:25.634 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.634 EAL: request: mp_malloc_sync 00:04:25.634 EAL: No shared files mode enabled, IPC is disabled 00:04:25.634 EAL: Heap on socket 0 was expanded by 130MB 00:04:25.634 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.634 EAL: request: mp_malloc_sync 00:04:25.634 EAL: No shared files mode enabled, IPC is disabled 00:04:25.634 EAL: Heap on socket 0 was shrunk by 130MB 00:04:25.634 EAL: Trying to obtain current memory policy. 00:04:25.634 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.634 EAL: Restoring previous memory policy: 4 00:04:25.634 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.634 EAL: request: mp_malloc_sync 00:04:25.634 EAL: No shared files mode enabled, IPC is disabled 00:04:25.634 EAL: Heap on socket 0 was expanded by 258MB 00:04:25.634 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.634 EAL: request: mp_malloc_sync 00:04:25.634 EAL: No shared files mode enabled, IPC is disabled 00:04:25.634 EAL: Heap on socket 0 was shrunk by 258MB 00:04:25.634 EAL: Trying to obtain current memory policy. 
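The allocation sizes the malloc test walks through (4, 6, 10, 18, 34, 66, 130 and 258 MB so far, with 514 MB and 1026 MB still to come below) follow a simple pattern: each request is 2^k + 2 MB, so every step roughly doubles the pressure on the heap. Reproducing the sequence, purely as an observation about the numbers in this trace:

  for k in $(seq 1 10); do printf '%sMB ' $(( (1 << k) + 2 )); done; echo
  # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB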
00:04:25.634 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.893 EAL: Restoring previous memory policy: 4 00:04:25.893 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.893 EAL: request: mp_malloc_sync 00:04:25.893 EAL: No shared files mode enabled, IPC is disabled 00:04:25.893 EAL: Heap on socket 0 was expanded by 514MB 00:04:25.893 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.893 EAL: request: mp_malloc_sync 00:04:25.893 EAL: No shared files mode enabled, IPC is disabled 00:04:25.893 EAL: Heap on socket 0 was shrunk by 514MB 00:04:25.893 EAL: Trying to obtain current memory policy. 00:04:25.893 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.153 EAL: Restoring previous memory policy: 4 00:04:26.153 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.153 EAL: request: mp_malloc_sync 00:04:26.153 EAL: No shared files mode enabled, IPC is disabled 00:04:26.153 EAL: Heap on socket 0 was expanded by 1026MB 00:04:26.153 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.153 passed 00:04:26.153 00:04:26.153 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.153 suites 1 1 n/a 0 0 00:04:26.153 tests 2 2 2 0 0 00:04:26.153 asserts 5267 5267 5267 0 n/a 00:04:26.153 00:04:26.153 Elapsed time = 0.648 seconds 00:04:26.153 EAL: request: mp_malloc_sync 00:04:26.153 EAL: No shared files mode enabled, IPC is disabled 00:04:26.153 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:26.153 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.153 EAL: request: mp_malloc_sync 00:04:26.153 EAL: No shared files mode enabled, IPC is disabled 00:04:26.153 EAL: Heap on socket 0 was shrunk by 2MB 00:04:26.153 EAL: No shared files mode enabled, IPC is disabled 00:04:26.153 EAL: No shared files mode enabled, IPC is disabled 00:04:26.153 EAL: No shared files mode enabled, IPC is disabled 00:04:26.153 00:04:26.153 real 0m0.840s 00:04:26.153 user 0m0.423s 00:04:26.153 sys 0m0.287s 00:04:26.153 06:34:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.153 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.153 ************************************ 00:04:26.153 END TEST env_vtophys 00:04:26.153 ************************************ 00:04:26.412 06:34:40 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:26.412 06:34:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.412 06:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.412 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.412 ************************************ 00:04:26.412 START TEST env_pci 00:04:26.412 ************************************ 00:04:26.412 06:34:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:26.412 00:04:26.412 00:04:26.412 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.412 http://cunit.sourceforge.net/ 00:04:26.412 00:04:26.412 00:04:26.412 Suite: pci 00:04:26.412 Test: pci_hook ...[2024-12-14 06:34:40.186265] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 53689 has claimed it 00:04:26.412 passed 00:04:26.412 00:04:26.412 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.412 suites 1 1 n/a 0 0 00:04:26.412 tests 1 1 1 0 0 00:04:26.412 asserts 25 25 25 0 n/a 00:04:26.412 00:04:26.412 Elapsed time = 0.002 seconds 00:04:26.412 EAL: Cannot find device (10000:00:01.0) 00:04:26.412 EAL: Failed to attach device 
on primary process 00:04:26.412 00:04:26.412 real 0m0.018s 00:04:26.412 user 0m0.007s 00:04:26.412 sys 0m0.010s 00:04:26.412 06:34:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.412 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.412 ************************************ 00:04:26.412 END TEST env_pci 00:04:26.412 ************************************ 00:04:26.412 06:34:40 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:26.412 06:34:40 -- env/env.sh@15 -- # uname 00:04:26.412 06:34:40 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:26.412 06:34:40 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:26.412 06:34:40 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.412 06:34:40 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:26.412 06:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.412 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.412 ************************************ 00:04:26.412 START TEST env_dpdk_post_init 00:04:26.412 ************************************ 00:04:26.412 06:34:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.412 EAL: Detected CPU lcores: 10 00:04:26.412 EAL: Detected NUMA nodes: 1 00:04:26.412 EAL: Detected shared linkage of DPDK 00:04:26.412 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.412 EAL: Selected IOVA mode 'PA' 00:04:26.412 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:26.671 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:26.671 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:26.671 Starting DPDK initialization... 00:04:26.671 Starting SPDK post initialization... 00:04:26.671 SPDK NVMe probe 00:04:26.671 Attaching to 0000:00:06.0 00:04:26.671 Attaching to 0000:00:07.0 00:04:26.671 Attached to 0000:00:06.0 00:04:26.671 Attached to 0000:00:07.0 00:04:26.671 Cleaning up... 
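A note on the 'Probe PCI driver: spdk_nvme' lines just above: by this point setup.sh has rebound both NVMe functions from the kernel nvme driver to uio_pci_generic, which is what allows the userspace spdk_nvme driver to attach during env_dpdk_post_init. Which kernel driver currently owns a function can be checked with plain sysfs (addresses taken from this run):

  for bdf in 0000:00:06.0 0000:00:07.0; do
      echo "$bdf -> $(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
  done
  # expected: uio_pci_generic while the env tests run, nvme again after setup.sh reset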
00:04:26.671 00:04:26.671 real 0m0.169s 00:04:26.671 user 0m0.041s 00:04:26.671 sys 0m0.028s 00:04:26.671 06:34:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.671 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.671 ************************************ 00:04:26.671 END TEST env_dpdk_post_init 00:04:26.671 ************************************ 00:04:26.671 06:34:40 -- env/env.sh@26 -- # uname 00:04:26.671 06:34:40 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:26.671 06:34:40 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:26.671 06:34:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.671 06:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.671 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.671 ************************************ 00:04:26.671 START TEST env_mem_callbacks 00:04:26.671 ************************************ 00:04:26.671 06:34:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:26.671 EAL: Detected CPU lcores: 10 00:04:26.671 EAL: Detected NUMA nodes: 1 00:04:26.671 EAL: Detected shared linkage of DPDK 00:04:26.671 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.671 EAL: Selected IOVA mode 'PA' 00:04:26.671 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:26.671 00:04:26.671 00:04:26.671 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.671 http://cunit.sourceforge.net/ 00:04:26.671 00:04:26.671 00:04:26.671 Suite: memory 00:04:26.671 Test: test ... 00:04:26.671 register 0x200000200000 2097152 00:04:26.671 malloc 3145728 00:04:26.671 register 0x200000400000 4194304 00:04:26.671 buf 0x200000500000 len 3145728 PASSED 00:04:26.671 malloc 64 00:04:26.671 buf 0x2000004fff40 len 64 PASSED 00:04:26.671 malloc 4194304 00:04:26.671 register 0x200000800000 6291456 00:04:26.671 buf 0x200000a00000 len 4194304 PASSED 00:04:26.671 free 0x200000500000 3145728 00:04:26.671 free 0x2000004fff40 64 00:04:26.671 unregister 0x200000400000 4194304 PASSED 00:04:26.671 free 0x200000a00000 4194304 00:04:26.671 unregister 0x200000800000 6291456 PASSED 00:04:26.671 malloc 8388608 00:04:26.671 register 0x200000400000 10485760 00:04:26.671 buf 0x200000600000 len 8388608 PASSED 00:04:26.671 free 0x200000600000 8388608 00:04:26.671 unregister 0x200000400000 10485760 PASSED 00:04:26.671 passed 00:04:26.671 00:04:26.671 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.671 suites 1 1 n/a 0 0 00:04:26.671 tests 1 1 1 0 0 00:04:26.671 asserts 15 15 15 0 n/a 00:04:26.671 00:04:26.671 Elapsed time = 0.007 seconds 00:04:26.671 00:04:26.671 real 0m0.141s 00:04:26.671 user 0m0.020s 00:04:26.671 sys 0m0.019s 00:04:26.671 06:34:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.671 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.671 ************************************ 00:04:26.671 END TEST env_mem_callbacks 00:04:26.671 ************************************ 00:04:26.671 00:04:26.671 real 0m1.844s 00:04:26.671 user 0m0.912s 00:04:26.671 sys 0m0.571s 00:04:26.671 06:34:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.671 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.671 ************************************ 00:04:26.671 END TEST env 00:04:26.671 ************************************ 00:04:26.938 06:34:40 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
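The 'START TEST ... / END TEST ...' banners and the real/user/sys timings that bracket every suite in this log come from the run_test helper in autotest_common.sh. Roughly what that wrapper does, as a sketch only (the real helper also records timing data for the report and manages xtrace, none of which is shown here):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                   # e.g. run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }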
00:04:26.938 06:34:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.938 06:34:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.938 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.938 ************************************ 00:04:26.938 START TEST rpc 00:04:26.938 ************************************ 00:04:26.938 06:34:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:26.938 * Looking for test storage... 00:04:26.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:26.938 06:34:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:26.938 06:34:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:26.938 06:34:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:26.938 06:34:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:26.938 06:34:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:26.938 06:34:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:26.938 06:34:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:26.938 06:34:40 -- scripts/common.sh@335 -- # IFS=.-: 00:04:26.938 06:34:40 -- scripts/common.sh@335 -- # read -ra ver1 00:04:26.938 06:34:40 -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.938 06:34:40 -- scripts/common.sh@336 -- # read -ra ver2 00:04:26.938 06:34:40 -- scripts/common.sh@337 -- # local 'op=<' 00:04:26.938 06:34:40 -- scripts/common.sh@339 -- # ver1_l=2 00:04:26.938 06:34:40 -- scripts/common.sh@340 -- # ver2_l=1 00:04:26.938 06:34:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:26.938 06:34:40 -- scripts/common.sh@343 -- # case "$op" in 00:04:26.938 06:34:40 -- scripts/common.sh@344 -- # : 1 00:04:26.938 06:34:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:26.938 06:34:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.938 06:34:40 -- scripts/common.sh@364 -- # decimal 1 00:04:26.938 06:34:40 -- scripts/common.sh@352 -- # local d=1 00:04:26.938 06:34:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.938 06:34:40 -- scripts/common.sh@354 -- # echo 1 00:04:26.938 06:34:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:26.938 06:34:40 -- scripts/common.sh@365 -- # decimal 2 00:04:26.938 06:34:40 -- scripts/common.sh@352 -- # local d=2 00:04:26.938 06:34:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.938 06:34:40 -- scripts/common.sh@354 -- # echo 2 00:04:26.938 06:34:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:26.938 06:34:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:26.938 06:34:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:26.938 06:34:40 -- scripts/common.sh@367 -- # return 0 00:04:26.938 06:34:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.938 06:34:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:26.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.938 --rc genhtml_branch_coverage=1 00:04:26.938 --rc genhtml_function_coverage=1 00:04:26.938 --rc genhtml_legend=1 00:04:26.938 --rc geninfo_all_blocks=1 00:04:26.938 --rc geninfo_unexecuted_blocks=1 00:04:26.938 00:04:26.938 ' 00:04:26.938 06:34:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:26.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.938 --rc genhtml_branch_coverage=1 00:04:26.938 --rc genhtml_function_coverage=1 00:04:26.938 --rc genhtml_legend=1 00:04:26.938 --rc geninfo_all_blocks=1 00:04:26.938 --rc geninfo_unexecuted_blocks=1 00:04:26.938 00:04:26.938 ' 00:04:26.938 06:34:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:26.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.938 --rc genhtml_branch_coverage=1 00:04:26.938 --rc genhtml_function_coverage=1 00:04:26.938 --rc genhtml_legend=1 00:04:26.938 --rc geninfo_all_blocks=1 00:04:26.938 --rc geninfo_unexecuted_blocks=1 00:04:26.938 00:04:26.938 ' 00:04:26.938 06:34:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:26.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.938 --rc genhtml_branch_coverage=1 00:04:26.938 --rc genhtml_function_coverage=1 00:04:26.938 --rc genhtml_legend=1 00:04:26.938 --rc geninfo_all_blocks=1 00:04:26.938 --rc geninfo_unexecuted_blocks=1 00:04:26.938 00:04:26.938 ' 00:04:26.938 06:34:40 -- rpc/rpc.sh@65 -- # spdk_pid=53806 00:04:26.938 06:34:40 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:26.938 06:34:40 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.938 06:34:40 -- rpc/rpc.sh@67 -- # waitforlisten 53806 00:04:26.938 06:34:40 -- common/autotest_common.sh@829 -- # '[' -z 53806 ']' 00:04:26.938 06:34:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.938 06:34:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:26.938 06:34:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
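What happens next in the trace is rpc.sh launching an spdk_tgt instance (pid 53806) and waitforlisten blocking until the target's RPC socket answers. A minimal stand-in for that startup dance, assuming only the binary, flags and socket path visible in the log (the real waitforlisten also verifies the pid stays alive and times out eventually):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!

  # poll the default RPC socket until the target responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done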
00:04:26.938 06:34:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:26.938 06:34:40 -- common/autotest_common.sh@10 -- # set +x 00:04:27.227 [2024-12-14 06:34:40.945739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:27.227 [2024-12-14 06:34:40.945875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53806 ] 00:04:27.227 [2024-12-14 06:34:41.079231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.227 [2024-12-14 06:34:41.150990] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:27.227 [2024-12-14 06:34:41.151148] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:27.227 [2024-12-14 06:34:41.151161] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 53806' to capture a snapshot of events at runtime. 00:04:27.227 [2024-12-14 06:34:41.151168] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid53806 for offline analysis/debug. 00:04:27.227 [2024-12-14 06:34:41.151191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.202 06:34:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:28.202 06:34:41 -- common/autotest_common.sh@862 -- # return 0 00:04:28.202 06:34:41 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:28.202 06:34:41 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:28.202 06:34:41 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:28.202 06:34:41 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:28.202 06:34:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.202 06:34:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.202 06:34:41 -- common/autotest_common.sh@10 -- # set +x 00:04:28.202 ************************************ 00:04:28.202 START TEST rpc_integrity 00:04:28.202 ************************************ 00:04:28.202 06:34:41 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:28.202 06:34:41 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.202 06:34:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.202 06:34:41 -- common/autotest_common.sh@10 -- # set +x 00:04:28.202 06:34:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.202 06:34:41 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.202 06:34:41 -- rpc/rpc.sh@13 -- # jq length 00:04:28.202 06:34:42 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.202 06:34:42 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.202 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.202 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.202 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.202 06:34:42 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:28.202 06:34:42 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.202 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.202 06:34:42 -- 
common/autotest_common.sh@10 -- # set +x 00:04:28.202 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.202 06:34:42 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.202 { 00:04:28.202 "name": "Malloc0", 00:04:28.202 "aliases": [ 00:04:28.202 "a21daab9-40f5-4bff-856d-33595579c23c" 00:04:28.202 ], 00:04:28.202 "product_name": "Malloc disk", 00:04:28.202 "block_size": 512, 00:04:28.203 "num_blocks": 16384, 00:04:28.203 "uuid": "a21daab9-40f5-4bff-856d-33595579c23c", 00:04:28.203 "assigned_rate_limits": { 00:04:28.203 "rw_ios_per_sec": 0, 00:04:28.203 "rw_mbytes_per_sec": 0, 00:04:28.203 "r_mbytes_per_sec": 0, 00:04:28.203 "w_mbytes_per_sec": 0 00:04:28.203 }, 00:04:28.203 "claimed": false, 00:04:28.203 "zoned": false, 00:04:28.203 "supported_io_types": { 00:04:28.203 "read": true, 00:04:28.203 "write": true, 00:04:28.203 "unmap": true, 00:04:28.203 "write_zeroes": true, 00:04:28.203 "flush": true, 00:04:28.203 "reset": true, 00:04:28.203 "compare": false, 00:04:28.203 "compare_and_write": false, 00:04:28.203 "abort": true, 00:04:28.203 "nvme_admin": false, 00:04:28.203 "nvme_io": false 00:04:28.203 }, 00:04:28.203 "memory_domains": [ 00:04:28.203 { 00:04:28.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.203 "dma_device_type": 2 00:04:28.203 } 00:04:28.203 ], 00:04:28.203 "driver_specific": {} 00:04:28.203 } 00:04:28.203 ]' 00:04:28.203 06:34:42 -- rpc/rpc.sh@17 -- # jq length 00:04:28.203 06:34:42 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.203 06:34:42 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:28.203 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.203 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.203 [2024-12-14 06:34:42.113831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:28.203 [2024-12-14 06:34:42.114081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.203 [2024-12-14 06:34:42.114108] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22254c0 00:04:28.203 [2024-12-14 06:34:42.114118] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.203 [2024-12-14 06:34:42.115657] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.203 [2024-12-14 06:34:42.115696] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.203 Passthru0 00:04:28.203 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.203 06:34:42 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.203 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.203 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.203 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.203 06:34:42 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.203 { 00:04:28.203 "name": "Malloc0", 00:04:28.203 "aliases": [ 00:04:28.203 "a21daab9-40f5-4bff-856d-33595579c23c" 00:04:28.203 ], 00:04:28.203 "product_name": "Malloc disk", 00:04:28.203 "block_size": 512, 00:04:28.203 "num_blocks": 16384, 00:04:28.203 "uuid": "a21daab9-40f5-4bff-856d-33595579c23c", 00:04:28.203 "assigned_rate_limits": { 00:04:28.203 "rw_ios_per_sec": 0, 00:04:28.203 "rw_mbytes_per_sec": 0, 00:04:28.203 "r_mbytes_per_sec": 0, 00:04:28.203 "w_mbytes_per_sec": 0 00:04:28.203 }, 00:04:28.203 "claimed": true, 00:04:28.203 "claim_type": "exclusive_write", 00:04:28.203 "zoned": false, 00:04:28.203 "supported_io_types": { 00:04:28.203 "read": true, 
00:04:28.203 "write": true, 00:04:28.203 "unmap": true, 00:04:28.203 "write_zeroes": true, 00:04:28.203 "flush": true, 00:04:28.203 "reset": true, 00:04:28.203 "compare": false, 00:04:28.203 "compare_and_write": false, 00:04:28.203 "abort": true, 00:04:28.203 "nvme_admin": false, 00:04:28.203 "nvme_io": false 00:04:28.203 }, 00:04:28.203 "memory_domains": [ 00:04:28.203 { 00:04:28.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.203 "dma_device_type": 2 00:04:28.203 } 00:04:28.203 ], 00:04:28.203 "driver_specific": {} 00:04:28.203 }, 00:04:28.203 { 00:04:28.203 "name": "Passthru0", 00:04:28.203 "aliases": [ 00:04:28.203 "a2e262bd-f973-5f82-8c65-6a727aeb27a6" 00:04:28.203 ], 00:04:28.203 "product_name": "passthru", 00:04:28.203 "block_size": 512, 00:04:28.203 "num_blocks": 16384, 00:04:28.203 "uuid": "a2e262bd-f973-5f82-8c65-6a727aeb27a6", 00:04:28.203 "assigned_rate_limits": { 00:04:28.203 "rw_ios_per_sec": 0, 00:04:28.203 "rw_mbytes_per_sec": 0, 00:04:28.203 "r_mbytes_per_sec": 0, 00:04:28.203 "w_mbytes_per_sec": 0 00:04:28.203 }, 00:04:28.203 "claimed": false, 00:04:28.203 "zoned": false, 00:04:28.203 "supported_io_types": { 00:04:28.203 "read": true, 00:04:28.203 "write": true, 00:04:28.203 "unmap": true, 00:04:28.203 "write_zeroes": true, 00:04:28.203 "flush": true, 00:04:28.203 "reset": true, 00:04:28.203 "compare": false, 00:04:28.203 "compare_and_write": false, 00:04:28.203 "abort": true, 00:04:28.203 "nvme_admin": false, 00:04:28.203 "nvme_io": false 00:04:28.203 }, 00:04:28.203 "memory_domains": [ 00:04:28.203 { 00:04:28.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.203 "dma_device_type": 2 00:04:28.203 } 00:04:28.203 ], 00:04:28.203 "driver_specific": { 00:04:28.203 "passthru": { 00:04:28.203 "name": "Passthru0", 00:04:28.203 "base_bdev_name": "Malloc0" 00:04:28.203 } 00:04:28.203 } 00:04:28.203 } 00:04:28.203 ]' 00:04:28.203 06:34:42 -- rpc/rpc.sh@21 -- # jq length 00:04:28.462 06:34:42 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.462 06:34:42 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.462 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.462 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.462 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.462 06:34:42 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:28.462 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.462 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.462 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.462 06:34:42 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.462 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.462 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.462 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.462 06:34:42 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.462 06:34:42 -- rpc/rpc.sh@26 -- # jq length 00:04:28.463 ************************************ 00:04:28.463 END TEST rpc_integrity 00:04:28.463 ************************************ 00:04:28.463 06:34:42 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.463 00:04:28.463 real 0m0.324s 00:04:28.463 user 0m0.217s 00:04:28.463 sys 0m0.037s 00:04:28.463 06:34:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:28.463 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 06:34:42 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:28.463 06:34:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
00:04:28.463 06:34:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.463 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 ************************************ 00:04:28.463 START TEST rpc_plugins 00:04:28.463 ************************************ 00:04:28.463 06:34:42 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:28.463 06:34:42 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:28.463 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.463 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.463 06:34:42 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:28.463 06:34:42 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:28.463 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.463 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.463 06:34:42 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:28.463 { 00:04:28.463 "name": "Malloc1", 00:04:28.463 "aliases": [ 00:04:28.463 "b0f20acb-4a78-4726-8da7-fd4126a29dc2" 00:04:28.463 ], 00:04:28.463 "product_name": "Malloc disk", 00:04:28.463 "block_size": 4096, 00:04:28.463 "num_blocks": 256, 00:04:28.463 "uuid": "b0f20acb-4a78-4726-8da7-fd4126a29dc2", 00:04:28.463 "assigned_rate_limits": { 00:04:28.463 "rw_ios_per_sec": 0, 00:04:28.463 "rw_mbytes_per_sec": 0, 00:04:28.463 "r_mbytes_per_sec": 0, 00:04:28.463 "w_mbytes_per_sec": 0 00:04:28.463 }, 00:04:28.463 "claimed": false, 00:04:28.463 "zoned": false, 00:04:28.463 "supported_io_types": { 00:04:28.463 "read": true, 00:04:28.463 "write": true, 00:04:28.463 "unmap": true, 00:04:28.463 "write_zeroes": true, 00:04:28.463 "flush": true, 00:04:28.463 "reset": true, 00:04:28.463 "compare": false, 00:04:28.463 "compare_and_write": false, 00:04:28.463 "abort": true, 00:04:28.463 "nvme_admin": false, 00:04:28.463 "nvme_io": false 00:04:28.463 }, 00:04:28.463 "memory_domains": [ 00:04:28.463 { 00:04:28.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.463 "dma_device_type": 2 00:04:28.463 } 00:04:28.463 ], 00:04:28.463 "driver_specific": {} 00:04:28.463 } 00:04:28.463 ]' 00:04:28.463 06:34:42 -- rpc/rpc.sh@32 -- # jq length 00:04:28.463 06:34:42 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:28.463 06:34:42 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:28.463 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.463 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.463 06:34:42 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:28.463 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.463 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.463 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.463 06:34:42 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:28.463 06:34:42 -- rpc/rpc.sh@36 -- # jq length 00:04:28.722 ************************************ 00:04:28.722 END TEST rpc_plugins 00:04:28.722 ************************************ 00:04:28.722 06:34:42 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:28.722 00:04:28.722 real 0m0.155s 00:04:28.722 user 0m0.103s 00:04:28.722 sys 0m0.019s 00:04:28.722 06:34:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:28.722 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.722 06:34:42 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:04:28.722 06:34:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.722 06:34:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.722 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.722 ************************************ 00:04:28.722 START TEST rpc_trace_cmd_test 00:04:28.722 ************************************ 00:04:28.722 06:34:42 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:28.722 06:34:42 -- rpc/rpc.sh@40 -- # local info 00:04:28.722 06:34:42 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:28.722 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.722 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.722 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.722 06:34:42 -- rpc/rpc.sh@42 -- # info='{ 00:04:28.722 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid53806", 00:04:28.722 "tpoint_group_mask": "0x8", 00:04:28.722 "iscsi_conn": { 00:04:28.722 "mask": "0x2", 00:04:28.722 "tpoint_mask": "0x0" 00:04:28.722 }, 00:04:28.722 "scsi": { 00:04:28.722 "mask": "0x4", 00:04:28.722 "tpoint_mask": "0x0" 00:04:28.722 }, 00:04:28.722 "bdev": { 00:04:28.722 "mask": "0x8", 00:04:28.722 "tpoint_mask": "0xffffffffffffffff" 00:04:28.722 }, 00:04:28.722 "nvmf_rdma": { 00:04:28.722 "mask": "0x10", 00:04:28.722 "tpoint_mask": "0x0" 00:04:28.722 }, 00:04:28.722 "nvmf_tcp": { 00:04:28.722 "mask": "0x20", 00:04:28.722 "tpoint_mask": "0x0" 00:04:28.722 }, 00:04:28.722 "ftl": { 00:04:28.722 "mask": "0x40", 00:04:28.722 "tpoint_mask": "0x0" 00:04:28.722 }, 00:04:28.722 "blobfs": { 00:04:28.722 "mask": "0x80", 00:04:28.722 "tpoint_mask": "0x0" 00:04:28.722 }, 00:04:28.722 "dsa": { 00:04:28.722 "mask": "0x200", 00:04:28.722 "tpoint_mask": "0x0" 00:04:28.722 }, 00:04:28.722 "thread": { 00:04:28.722 "mask": "0x400", 00:04:28.722 "tpoint_mask": "0x0" 00:04:28.722 }, 00:04:28.722 "nvme_pcie": { 00:04:28.722 "mask": "0x800", 00:04:28.722 "tpoint_mask": "0x0" 00:04:28.722 }, 00:04:28.722 "iaa": { 00:04:28.722 "mask": "0x1000", 00:04:28.722 "tpoint_mask": "0x0" 00:04:28.722 }, 00:04:28.722 "nvme_tcp": { 00:04:28.722 "mask": "0x2000", 00:04:28.722 "tpoint_mask": "0x0" 00:04:28.722 }, 00:04:28.722 "bdev_nvme": { 00:04:28.722 "mask": "0x4000", 00:04:28.722 "tpoint_mask": "0x0" 00:04:28.722 } 00:04:28.722 }' 00:04:28.722 06:34:42 -- rpc/rpc.sh@43 -- # jq length 00:04:28.722 06:34:42 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:28.722 06:34:42 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:28.722 06:34:42 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:28.722 06:34:42 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:28.981 06:34:42 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:28.981 06:34:42 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:28.981 06:34:42 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:28.981 06:34:42 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:28.981 ************************************ 00:04:28.981 END TEST rpc_trace_cmd_test 00:04:28.981 ************************************ 00:04:28.981 06:34:42 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:28.981 00:04:28.981 real 0m0.278s 00:04:28.981 user 0m0.240s 00:04:28.981 sys 0m0.027s 00:04:28.981 06:34:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:28.981 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.981 06:34:42 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:28.981 06:34:42 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:28.981 06:34:42 -- rpc/rpc.sh@81 -- # run_test 
rpc_daemon_integrity rpc_integrity 00:04:28.981 06:34:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.981 06:34:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.981 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.981 ************************************ 00:04:28.981 START TEST rpc_daemon_integrity 00:04:28.981 ************************************ 00:04:28.981 06:34:42 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:28.981 06:34:42 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.981 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.981 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.981 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.981 06:34:42 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.981 06:34:42 -- rpc/rpc.sh@13 -- # jq length 00:04:28.981 06:34:42 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.981 06:34:42 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.981 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.981 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:28.981 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.981 06:34:42 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:28.981 06:34:42 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.981 06:34:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.981 06:34:42 -- common/autotest_common.sh@10 -- # set +x 00:04:29.240 06:34:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.240 06:34:42 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:29.240 { 00:04:29.240 "name": "Malloc2", 00:04:29.240 "aliases": [ 00:04:29.240 "7a56e7e4-7a4a-402d-a605-78818f62d053" 00:04:29.240 ], 00:04:29.241 "product_name": "Malloc disk", 00:04:29.241 "block_size": 512, 00:04:29.241 "num_blocks": 16384, 00:04:29.241 "uuid": "7a56e7e4-7a4a-402d-a605-78818f62d053", 00:04:29.241 "assigned_rate_limits": { 00:04:29.241 "rw_ios_per_sec": 0, 00:04:29.241 "rw_mbytes_per_sec": 0, 00:04:29.241 "r_mbytes_per_sec": 0, 00:04:29.241 "w_mbytes_per_sec": 0 00:04:29.241 }, 00:04:29.241 "claimed": false, 00:04:29.241 "zoned": false, 00:04:29.241 "supported_io_types": { 00:04:29.241 "read": true, 00:04:29.241 "write": true, 00:04:29.241 "unmap": true, 00:04:29.241 "write_zeroes": true, 00:04:29.241 "flush": true, 00:04:29.241 "reset": true, 00:04:29.241 "compare": false, 00:04:29.241 "compare_and_write": false, 00:04:29.241 "abort": true, 00:04:29.241 "nvme_admin": false, 00:04:29.241 "nvme_io": false 00:04:29.241 }, 00:04:29.241 "memory_domains": [ 00:04:29.241 { 00:04:29.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.241 "dma_device_type": 2 00:04:29.241 } 00:04:29.241 ], 00:04:29.241 "driver_specific": {} 00:04:29.241 } 00:04:29.241 ]' 00:04:29.241 06:34:42 -- rpc/rpc.sh@17 -- # jq length 00:04:29.241 06:34:43 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:29.241 06:34:43 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:29.241 06:34:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.241 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.241 [2024-12-14 06:34:43.038265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:29.241 [2024-12-14 06:34:43.038354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:29.241 [2024-12-14 06:34:43.038386] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2225c40 00:04:29.241 [2024-12-14 
06:34:43.038394] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:29.241 [2024-12-14 06:34:43.039631] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:29.241 [2024-12-14 06:34:43.039664] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:29.241 Passthru0 00:04:29.241 06:34:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.241 06:34:43 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:29.241 06:34:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.241 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.241 06:34:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.241 06:34:43 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:29.241 { 00:04:29.241 "name": "Malloc2", 00:04:29.241 "aliases": [ 00:04:29.241 "7a56e7e4-7a4a-402d-a605-78818f62d053" 00:04:29.241 ], 00:04:29.241 "product_name": "Malloc disk", 00:04:29.241 "block_size": 512, 00:04:29.241 "num_blocks": 16384, 00:04:29.241 "uuid": "7a56e7e4-7a4a-402d-a605-78818f62d053", 00:04:29.241 "assigned_rate_limits": { 00:04:29.241 "rw_ios_per_sec": 0, 00:04:29.241 "rw_mbytes_per_sec": 0, 00:04:29.241 "r_mbytes_per_sec": 0, 00:04:29.241 "w_mbytes_per_sec": 0 00:04:29.241 }, 00:04:29.241 "claimed": true, 00:04:29.241 "claim_type": "exclusive_write", 00:04:29.241 "zoned": false, 00:04:29.241 "supported_io_types": { 00:04:29.241 "read": true, 00:04:29.241 "write": true, 00:04:29.241 "unmap": true, 00:04:29.241 "write_zeroes": true, 00:04:29.241 "flush": true, 00:04:29.241 "reset": true, 00:04:29.241 "compare": false, 00:04:29.241 "compare_and_write": false, 00:04:29.241 "abort": true, 00:04:29.241 "nvme_admin": false, 00:04:29.241 "nvme_io": false 00:04:29.241 }, 00:04:29.241 "memory_domains": [ 00:04:29.241 { 00:04:29.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.241 "dma_device_type": 2 00:04:29.241 } 00:04:29.241 ], 00:04:29.241 "driver_specific": {} 00:04:29.241 }, 00:04:29.241 { 00:04:29.241 "name": "Passthru0", 00:04:29.241 "aliases": [ 00:04:29.241 "ef438ffc-cf0c-59dc-80a4-56100e403ea7" 00:04:29.241 ], 00:04:29.241 "product_name": "passthru", 00:04:29.241 "block_size": 512, 00:04:29.241 "num_blocks": 16384, 00:04:29.241 "uuid": "ef438ffc-cf0c-59dc-80a4-56100e403ea7", 00:04:29.241 "assigned_rate_limits": { 00:04:29.241 "rw_ios_per_sec": 0, 00:04:29.241 "rw_mbytes_per_sec": 0, 00:04:29.241 "r_mbytes_per_sec": 0, 00:04:29.241 "w_mbytes_per_sec": 0 00:04:29.241 }, 00:04:29.241 "claimed": false, 00:04:29.241 "zoned": false, 00:04:29.241 "supported_io_types": { 00:04:29.241 "read": true, 00:04:29.241 "write": true, 00:04:29.241 "unmap": true, 00:04:29.241 "write_zeroes": true, 00:04:29.241 "flush": true, 00:04:29.241 "reset": true, 00:04:29.241 "compare": false, 00:04:29.241 "compare_and_write": false, 00:04:29.241 "abort": true, 00:04:29.241 "nvme_admin": false, 00:04:29.241 "nvme_io": false 00:04:29.241 }, 00:04:29.241 "memory_domains": [ 00:04:29.241 { 00:04:29.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.241 "dma_device_type": 2 00:04:29.241 } 00:04:29.241 ], 00:04:29.241 "driver_specific": { 00:04:29.241 "passthru": { 00:04:29.241 "name": "Passthru0", 00:04:29.241 "base_bdev_name": "Malloc2" 00:04:29.241 } 00:04:29.241 } 00:04:29.241 } 00:04:29.241 ]' 00:04:29.241 06:34:43 -- rpc/rpc.sh@21 -- # jq length 00:04:29.241 06:34:43 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:29.241 06:34:43 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:29.241 06:34:43 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.241 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.241 06:34:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.241 06:34:43 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:29.241 06:34:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.241 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.241 06:34:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.241 06:34:43 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:29.241 06:34:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.241 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.241 06:34:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.241 06:34:43 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:29.241 06:34:43 -- rpc/rpc.sh@26 -- # jq length 00:04:29.241 ************************************ 00:04:29.241 END TEST rpc_daemon_integrity 00:04:29.241 ************************************ 00:04:29.241 06:34:43 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:29.241 00:04:29.241 real 0m0.320s 00:04:29.241 user 0m0.221s 00:04:29.241 sys 0m0.034s 00:04:29.241 06:34:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:29.241 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.500 06:34:43 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:29.500 06:34:43 -- rpc/rpc.sh@84 -- # killprocess 53806 00:04:29.500 06:34:43 -- common/autotest_common.sh@936 -- # '[' -z 53806 ']' 00:04:29.500 06:34:43 -- common/autotest_common.sh@940 -- # kill -0 53806 00:04:29.500 06:34:43 -- common/autotest_common.sh@941 -- # uname 00:04:29.500 06:34:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:29.500 06:34:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 53806 00:04:29.500 killing process with pid 53806 00:04:29.500 06:34:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:29.500 06:34:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:29.500 06:34:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 53806' 00:04:29.500 06:34:43 -- common/autotest_common.sh@955 -- # kill 53806 00:04:29.500 06:34:43 -- common/autotest_common.sh@960 -- # wait 53806 00:04:29.759 00:04:29.759 real 0m2.842s 00:04:29.759 user 0m3.838s 00:04:29.759 sys 0m0.571s 00:04:29.759 06:34:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:29.759 ************************************ 00:04:29.759 END TEST rpc 00:04:29.759 ************************************ 00:04:29.759 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.759 06:34:43 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:29.759 06:34:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.759 06:34:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.759 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.759 ************************************ 00:04:29.759 START TEST rpc_client 00:04:29.759 ************************************ 00:04:29.759 06:34:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:29.759 * Looking for test storage... 
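The rpc_trace_cmd_test above relies on the target having been started with the bdev tracepoint group enabled: trace_get_info reported tpoint_shm_path /dev/shm/spdk_tgt_trace.pid53806 and a bdev tpoint_mask of 0xffffffffffffffff. A rough sketch of poking the same tracing facility by hand follows; trace_enable_tpoint_group is assumed to be available in this SPDK build, and the spdk_trace invocation mirrors the hint printed at target startup.

    # inspect the active tracepoint groups and the shared-memory trace file
    ./scripts/rpc.py trace_get_info | jq -r '.tpoint_shm_path, .tpoint_group_mask'
    # enable another group at runtime (bdev_nvme is one of the groups listed by trace_get_info)
    ./scripts/rpc.py trace_enable_tpoint_group bdev_nvme
    # capture a snapshot of events, as suggested by the app_setup_trace notice
    spdk_trace -s spdk_tgt -p 53806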
00:04:29.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:29.759 06:34:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:29.759 06:34:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:29.759 06:34:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:30.018 06:34:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:30.018 06:34:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:30.018 06:34:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:30.018 06:34:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:30.018 06:34:43 -- scripts/common.sh@335 -- # IFS=.-: 00:04:30.018 06:34:43 -- scripts/common.sh@335 -- # read -ra ver1 00:04:30.018 06:34:43 -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.018 06:34:43 -- scripts/common.sh@336 -- # read -ra ver2 00:04:30.018 06:34:43 -- scripts/common.sh@337 -- # local 'op=<' 00:04:30.018 06:34:43 -- scripts/common.sh@339 -- # ver1_l=2 00:04:30.018 06:34:43 -- scripts/common.sh@340 -- # ver2_l=1 00:04:30.018 06:34:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:30.018 06:34:43 -- scripts/common.sh@343 -- # case "$op" in 00:04:30.018 06:34:43 -- scripts/common.sh@344 -- # : 1 00:04:30.018 06:34:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:30.018 06:34:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.018 06:34:43 -- scripts/common.sh@364 -- # decimal 1 00:04:30.018 06:34:43 -- scripts/common.sh@352 -- # local d=1 00:04:30.019 06:34:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.019 06:34:43 -- scripts/common.sh@354 -- # echo 1 00:04:30.019 06:34:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:30.019 06:34:43 -- scripts/common.sh@365 -- # decimal 2 00:04:30.019 06:34:43 -- scripts/common.sh@352 -- # local d=2 00:04:30.019 06:34:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.019 06:34:43 -- scripts/common.sh@354 -- # echo 2 00:04:30.019 06:34:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:30.019 06:34:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:30.019 06:34:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:30.019 06:34:43 -- scripts/common.sh@367 -- # return 0 00:04:30.019 06:34:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.019 06:34:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:30.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.019 --rc genhtml_branch_coverage=1 00:04:30.019 --rc genhtml_function_coverage=1 00:04:30.019 --rc genhtml_legend=1 00:04:30.019 --rc geninfo_all_blocks=1 00:04:30.019 --rc geninfo_unexecuted_blocks=1 00:04:30.019 00:04:30.019 ' 00:04:30.019 06:34:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:30.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.019 --rc genhtml_branch_coverage=1 00:04:30.019 --rc genhtml_function_coverage=1 00:04:30.019 --rc genhtml_legend=1 00:04:30.019 --rc geninfo_all_blocks=1 00:04:30.019 --rc geninfo_unexecuted_blocks=1 00:04:30.019 00:04:30.019 ' 00:04:30.019 06:34:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:30.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.019 --rc genhtml_branch_coverage=1 00:04:30.019 --rc genhtml_function_coverage=1 00:04:30.019 --rc genhtml_legend=1 00:04:30.019 --rc geninfo_all_blocks=1 00:04:30.019 --rc geninfo_unexecuted_blocks=1 00:04:30.019 00:04:30.019 ' 00:04:30.019 
06:34:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:30.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.019 --rc genhtml_branch_coverage=1 00:04:30.019 --rc genhtml_function_coverage=1 00:04:30.019 --rc genhtml_legend=1 00:04:30.019 --rc geninfo_all_blocks=1 00:04:30.019 --rc geninfo_unexecuted_blocks=1 00:04:30.019 00:04:30.019 ' 00:04:30.019 06:34:43 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:30.019 OK 00:04:30.019 06:34:43 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:30.019 00:04:30.019 real 0m0.203s 00:04:30.019 user 0m0.131s 00:04:30.019 sys 0m0.078s 00:04:30.019 06:34:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:30.019 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:04:30.019 ************************************ 00:04:30.019 END TEST rpc_client 00:04:30.019 ************************************ 00:04:30.019 06:34:43 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:30.019 06:34:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.019 06:34:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.019 06:34:43 -- common/autotest_common.sh@10 -- # set +x 00:04:30.019 ************************************ 00:04:30.019 START TEST json_config 00:04:30.019 ************************************ 00:04:30.019 06:34:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:30.019 06:34:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:30.019 06:34:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:30.019 06:34:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:30.019 06:34:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:30.019 06:34:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:30.019 06:34:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:30.019 06:34:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:30.019 06:34:43 -- scripts/common.sh@335 -- # IFS=.-: 00:04:30.019 06:34:43 -- scripts/common.sh@335 -- # read -ra ver1 00:04:30.019 06:34:43 -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.019 06:34:43 -- scripts/common.sh@336 -- # read -ra ver2 00:04:30.019 06:34:43 -- scripts/common.sh@337 -- # local 'op=<' 00:04:30.019 06:34:43 -- scripts/common.sh@339 -- # ver1_l=2 00:04:30.019 06:34:43 -- scripts/common.sh@340 -- # ver2_l=1 00:04:30.019 06:34:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:30.019 06:34:43 -- scripts/common.sh@343 -- # case "$op" in 00:04:30.019 06:34:43 -- scripts/common.sh@344 -- # : 1 00:04:30.019 06:34:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:30.019 06:34:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.019 06:34:43 -- scripts/common.sh@364 -- # decimal 1 00:04:30.019 06:34:43 -- scripts/common.sh@352 -- # local d=1 00:04:30.019 06:34:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.019 06:34:43 -- scripts/common.sh@354 -- # echo 1 00:04:30.019 06:34:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:30.019 06:34:43 -- scripts/common.sh@365 -- # decimal 2 00:04:30.019 06:34:43 -- scripts/common.sh@352 -- # local d=2 00:04:30.019 06:34:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.019 06:34:44 -- scripts/common.sh@354 -- # echo 2 00:04:30.019 06:34:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:30.019 06:34:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:30.019 06:34:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:30.019 06:34:44 -- scripts/common.sh@367 -- # return 0 00:04:30.019 06:34:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.019 06:34:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:30.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.019 --rc genhtml_branch_coverage=1 00:04:30.019 --rc genhtml_function_coverage=1 00:04:30.019 --rc genhtml_legend=1 00:04:30.019 --rc geninfo_all_blocks=1 00:04:30.019 --rc geninfo_unexecuted_blocks=1 00:04:30.019 00:04:30.019 ' 00:04:30.019 06:34:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:30.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.019 --rc genhtml_branch_coverage=1 00:04:30.019 --rc genhtml_function_coverage=1 00:04:30.019 --rc genhtml_legend=1 00:04:30.019 --rc geninfo_all_blocks=1 00:04:30.019 --rc geninfo_unexecuted_blocks=1 00:04:30.019 00:04:30.019 ' 00:04:30.019 06:34:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:30.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.019 --rc genhtml_branch_coverage=1 00:04:30.019 --rc genhtml_function_coverage=1 00:04:30.019 --rc genhtml_legend=1 00:04:30.019 --rc geninfo_all_blocks=1 00:04:30.019 --rc geninfo_unexecuted_blocks=1 00:04:30.019 00:04:30.019 ' 00:04:30.019 06:34:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:30.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.019 --rc genhtml_branch_coverage=1 00:04:30.019 --rc genhtml_function_coverage=1 00:04:30.019 --rc genhtml_legend=1 00:04:30.019 --rc geninfo_all_blocks=1 00:04:30.019 --rc geninfo_unexecuted_blocks=1 00:04:30.019 00:04:30.019 ' 00:04:30.019 06:34:44 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:30.278 06:34:44 -- nvmf/common.sh@7 -- # uname -s 00:04:30.278 06:34:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.278 06:34:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.278 06:34:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.278 06:34:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.278 06:34:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.278 06:34:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.278 06:34:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.278 06:34:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.278 06:34:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.278 06:34:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.278 06:34:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 
00:04:30.278 06:34:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:04:30.278 06:34:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.278 06:34:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.278 06:34:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:30.278 06:34:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:30.278 06:34:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.278 06:34:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.278 06:34:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.278 06:34:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.278 06:34:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.278 06:34:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.278 06:34:44 -- paths/export.sh@5 -- # export PATH 00:04:30.278 06:34:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.278 06:34:44 -- nvmf/common.sh@46 -- # : 0 00:04:30.278 06:34:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:30.278 06:34:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:30.278 06:34:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:30.278 06:34:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.278 06:34:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.278 06:34:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:30.278 06:34:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:30.278 06:34:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:30.278 06:34:44 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:30.278 06:34:44 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:30.278 06:34:44 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:30.278 06:34:44 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:30.278 06:34:44 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:30.278 06:34:44 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:30.278 06:34:44 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:30.279 06:34:44 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:30.279 06:34:44 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:30.279 06:34:44 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:30.279 06:34:44 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:30.279 06:34:44 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:30.279 06:34:44 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:30.279 06:34:44 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:30.279 06:34:44 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:30.279 INFO: JSON configuration test init 00:04:30.279 06:34:44 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:30.279 06:34:44 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:30.279 06:34:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.279 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:04:30.279 06:34:44 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:30.279 06:34:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.279 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:04:30.279 06:34:44 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:30.279 06:34:44 -- json_config/json_config.sh@98 -- # local app=target 00:04:30.279 06:34:44 -- json_config/json_config.sh@99 -- # shift 00:04:30.279 06:34:44 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:30.279 Waiting for target to run... 00:04:30.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.279 06:34:44 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:30.279 06:34:44 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:30.279 06:34:44 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:30.279 06:34:44 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:30.279 06:34:44 -- json_config/json_config.sh@111 -- # app_pid[$app]=54064 00:04:30.279 06:34:44 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:30.279 06:34:44 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:30.279 06:34:44 -- json_config/json_config.sh@114 -- # waitforlisten 54064 /var/tmp/spdk_tgt.sock 00:04:30.279 06:34:44 -- common/autotest_common.sh@829 -- # '[' -z 54064 ']' 00:04:30.279 06:34:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.279 06:34:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.279 06:34:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
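The json_config test starts its own target paused with --wait-for-rpc and then blocks in waitforlisten until the UNIX-domain socket answers. Done by hand, the same idea looks roughly like the sketch below; rpc_get_methods is just a cheap RPC to poll with, and framework_start_init is what resumes initialization once pre-init configuration has been sent (both are assumed to exist in this SPDK build).

    # start the target paused, listening on the test socket
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # wait until the RPC socket is answering (this is what waitforlisten automates, with retries)
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    # send any pre-init RPCs here, then let the framework finish starting up
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init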
00:04:30.279 06:34:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.279 06:34:44 -- common/autotest_common.sh@10 -- # set +x 00:04:30.279 [2024-12-14 06:34:44.114725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:30.279 [2024-12-14 06:34:44.115003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54064 ] 00:04:30.537 [2024-12-14 06:34:44.410930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.537 [2024-12-14 06:34:44.454699] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:30.537 [2024-12-14 06:34:44.455103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.473 06:34:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.473 06:34:45 -- common/autotest_common.sh@862 -- # return 0 00:04:31.473 06:34:45 -- json_config/json_config.sh@115 -- # echo '' 00:04:31.473 00:04:31.473 06:34:45 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:31.473 06:34:45 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:31.473 06:34:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:31.473 06:34:45 -- common/autotest_common.sh@10 -- # set +x 00:04:31.473 06:34:45 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:31.473 06:34:45 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:31.473 06:34:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.473 06:34:45 -- common/autotest_common.sh@10 -- # set +x 00:04:31.473 06:34:45 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:31.473 06:34:45 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:31.473 06:34:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:31.732 06:34:45 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:31.732 06:34:45 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:31.732 06:34:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:31.732 06:34:45 -- common/autotest_common.sh@10 -- # set +x 00:04:31.732 06:34:45 -- json_config/json_config.sh@48 -- # local ret=0 00:04:31.732 06:34:45 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:31.732 06:34:45 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:31.732 06:34:45 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:31.732 06:34:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:31.732 06:34:45 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:31.991 06:34:45 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:31.991 06:34:45 -- json_config/json_config.sh@51 -- # local get_types 00:04:31.991 06:34:45 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:31.991 06:34:45 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:31.991 06:34:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.991 06:34:45 -- 
common/autotest_common.sh@10 -- # set +x 00:04:31.991 06:34:45 -- json_config/json_config.sh@58 -- # return 0 00:04:31.991 06:34:45 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:31.991 06:34:45 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:31.991 06:34:45 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:31.991 06:34:45 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:31.991 06:34:45 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:31.991 06:34:45 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:31.991 06:34:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:31.991 06:34:45 -- common/autotest_common.sh@10 -- # set +x 00:04:31.991 06:34:45 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:31.991 06:34:45 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:31.991 06:34:45 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:31.991 06:34:45 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:31.991 06:34:45 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:32.250 MallocForNvmf0 00:04:32.250 06:34:46 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:32.250 06:34:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:32.508 MallocForNvmf1 00:04:32.508 06:34:46 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:32.508 06:34:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:32.768 [2024-12-14 06:34:46.562515] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:32.768 06:34:46 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:32.768 06:34:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:33.027 06:34:46 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:33.027 06:34:46 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:33.027 06:34:47 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:33.027 06:34:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:33.286 06:34:47 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:33.286 06:34:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:33.545 [2024-12-14 06:34:47.427045] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:33.545 
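The create_nvmf_subsystem_config step above builds a small NVMe-oF/TCP target: two malloc bdevs, a TCP transport, one subsystem carrying both namespaces, and a listener on 127.0.0.1:4420. As standalone rpc.py calls against the test socket, the same sequence would look roughly like this (parameters copied from the trace above; note that -s on nvmf_create_subsystem is the serial number, not the socket path).

    rpc='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512  --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420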
06:34:47 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:33.545 06:34:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.545 06:34:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.545 06:34:47 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:33.545 06:34:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.545 06:34:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.545 06:34:47 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:33.545 06:34:47 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:33.545 06:34:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:33.805 MallocBdevForConfigChangeCheck 00:04:34.065 06:34:47 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:34.065 06:34:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.065 06:34:47 -- common/autotest_common.sh@10 -- # set +x 00:04:34.065 06:34:47 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:34.065 06:34:47 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.326 INFO: shutting down applications... 00:04:34.326 06:34:48 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:34.326 06:34:48 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:34.326 06:34:48 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:34.326 06:34:48 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:34.326 06:34:48 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:34.586 Calling clear_iscsi_subsystem 00:04:34.586 Calling clear_nvmf_subsystem 00:04:34.586 Calling clear_nbd_subsystem 00:04:34.586 Calling clear_ublk_subsystem 00:04:34.586 Calling clear_vhost_blk_subsystem 00:04:34.586 Calling clear_vhost_scsi_subsystem 00:04:34.586 Calling clear_scheduler_subsystem 00:04:34.586 Calling clear_bdev_subsystem 00:04:34.586 Calling clear_accel_subsystem 00:04:34.586 Calling clear_vmd_subsystem 00:04:34.586 Calling clear_sock_subsystem 00:04:34.586 Calling clear_iobuf_subsystem 00:04:34.586 06:34:48 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:34.586 06:34:48 -- json_config/json_config.sh@396 -- # count=100 00:04:34.586 06:34:48 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:34.586 06:34:48 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.586 06:34:48 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:34.586 06:34:48 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:35.155 06:34:48 -- json_config/json_config.sh@398 -- # break 00:04:35.155 06:34:48 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:35.155 06:34:48 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:04:35.155 06:34:48 -- json_config/json_config.sh@120 -- # local app=target 00:04:35.155 06:34:48 -- 
json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:35.155 06:34:48 -- json_config/json_config.sh@124 -- # [[ -n 54064 ]] 00:04:35.155 06:34:48 -- json_config/json_config.sh@127 -- # kill -SIGINT 54064 00:04:35.155 06:34:48 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:35.155 06:34:48 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:35.155 06:34:48 -- json_config/json_config.sh@130 -- # kill -0 54064 00:04:35.155 06:34:48 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:35.725 06:34:49 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:35.725 06:34:49 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:35.725 SPDK target shutdown done 00:04:35.725 INFO: relaunching applications... 00:04:35.725 06:34:49 -- json_config/json_config.sh@130 -- # kill -0 54064 00:04:35.725 06:34:49 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:35.725 06:34:49 -- json_config/json_config.sh@132 -- # break 00:04:35.725 06:34:49 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:35.725 06:34:49 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:35.725 06:34:49 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:35.725 06:34:49 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.725 06:34:49 -- json_config/json_config.sh@98 -- # local app=target 00:04:35.725 06:34:49 -- json_config/json_config.sh@99 -- # shift 00:04:35.725 06:34:49 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:35.725 06:34:49 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:35.725 06:34:49 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:35.725 06:34:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:35.725 06:34:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:35.725 06:34:49 -- json_config/json_config.sh@111 -- # app_pid[$app]=54255 00:04:35.725 06:34:49 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.725 06:34:49 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:35.725 Waiting for target to run... 00:04:35.725 06:34:49 -- json_config/json_config.sh@114 -- # waitforlisten 54255 /var/tmp/spdk_tgt.sock 00:04:35.725 06:34:49 -- common/autotest_common.sh@829 -- # '[' -z 54255 ']' 00:04:35.725 06:34:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:35.725 06:34:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:35.725 06:34:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:35.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:35.725 06:34:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:35.725 06:34:49 -- common/autotest_common.sh@10 -- # set +x 00:04:35.725 [2024-12-14 06:34:49.546639] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:35.725 [2024-12-14 06:34:49.546743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54255 ] 00:04:35.985 [2024-12-14 06:34:49.847792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.985 [2024-12-14 06:34:49.886774] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:35.985 [2024-12-14 06:34:49.886983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.245 [2024-12-14 06:34:50.184020] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:36.245 [2024-12-14 06:34:50.216107] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:36.505 06:34:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.505 06:34:50 -- common/autotest_common.sh@862 -- # return 0 00:04:36.505 00:04:36.505 INFO: Checking if target configuration is the same... 00:04:36.505 06:34:50 -- json_config/json_config.sh@115 -- # echo '' 00:04:36.505 06:34:50 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:36.505 06:34:50 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:36.505 06:34:50 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:36.505 06:34:50 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:36.505 06:34:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.764 + '[' 2 -ne 2 ']' 00:04:36.765 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:36.765 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:36.765 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:36.765 +++ basename /dev/fd/62 00:04:36.765 ++ mktemp /tmp/62.XXX 00:04:36.765 + tmp_file_1=/tmp/62.K9N 00:04:36.765 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:36.765 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:36.765 + tmp_file_2=/tmp/spdk_tgt_config.json.uSn 00:04:36.765 + ret=0 00:04:36.765 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:37.024 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:37.024 + diff -u /tmp/62.K9N /tmp/spdk_tgt_config.json.uSn 00:04:37.024 INFO: JSON config files are the same 00:04:37.024 + echo 'INFO: JSON config files are the same' 00:04:37.024 + rm /tmp/62.K9N /tmp/spdk_tgt_config.json.uSn 00:04:37.024 + exit 0 00:04:37.024 INFO: changing configuration and checking if this can be detected... 00:04:37.024 06:34:50 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:37.024 06:34:50 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:04:37.024 06:34:50 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:37.024 06:34:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:37.283 06:34:51 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:37.283 06:34:51 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:37.283 06:34:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:37.283 + '[' 2 -ne 2 ']' 00:04:37.283 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:37.283 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:37.283 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:37.283 +++ basename /dev/fd/62 00:04:37.283 ++ mktemp /tmp/62.XXX 00:04:37.283 + tmp_file_1=/tmp/62.Vku 00:04:37.283 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:37.283 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:37.283 + tmp_file_2=/tmp/spdk_tgt_config.json.nH8 00:04:37.283 + ret=0 00:04:37.283 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:37.542 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:37.801 + diff -u /tmp/62.Vku /tmp/spdk_tgt_config.json.nH8 00:04:37.801 + ret=1 00:04:37.801 + echo '=== Start of file: /tmp/62.Vku ===' 00:04:37.801 + cat /tmp/62.Vku 00:04:37.801 + echo '=== End of file: /tmp/62.Vku ===' 00:04:37.801 + echo '' 00:04:37.801 + echo '=== Start of file: /tmp/spdk_tgt_config.json.nH8 ===' 00:04:37.801 + cat /tmp/spdk_tgt_config.json.nH8 00:04:37.801 + echo '=== End of file: /tmp/spdk_tgt_config.json.nH8 ===' 00:04:37.801 + echo '' 00:04:37.801 + rm /tmp/62.Vku /tmp/spdk_tgt_config.json.nH8 00:04:37.801 + exit 1 00:04:37.801 INFO: configuration change detected. 00:04:37.801 06:34:51 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
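The change detection shown above is just a normalized diff: json_diff.sh dumps the live configuration with save_config, runs both that dump and the reference spdk_tgt_config.json through config_filter.py -method sort, and compares the results, so deleting MallocBdevForConfigChangeCheck is enough to make the diff non-empty. A stripped-down sketch of the same idea follows; the /tmp file names here are placeholders, unlike the mktemp-generated ones in the trace, and config_filter.py is assumed to read from stdin as it does in json_diff.sh.

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
    ./test/json_config/config_filter.py -method sort \
        < ./spdk_tgt_config.json > /tmp/saved_sorted.json
    diff -u /tmp/saved_sorted.json /tmp/live_sorted.json \
        && echo 'INFO: JSON config files are the same' \
        || echo 'INFO: configuration change detected.'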
00:04:37.801 06:34:51 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:37.801 06:34:51 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:37.801 06:34:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.801 06:34:51 -- common/autotest_common.sh@10 -- # set +x 00:04:37.801 06:34:51 -- json_config/json_config.sh@360 -- # local ret=0 00:04:37.801 06:34:51 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:37.801 06:34:51 -- json_config/json_config.sh@370 -- # [[ -n 54255 ]] 00:04:37.801 06:34:51 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:37.801 06:34:51 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:37.801 06:34:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.801 06:34:51 -- common/autotest_common.sh@10 -- # set +x 00:04:37.801 06:34:51 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:37.801 06:34:51 -- json_config/json_config.sh@246 -- # uname -s 00:04:37.801 06:34:51 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:37.801 06:34:51 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:37.801 06:34:51 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:37.801 06:34:51 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:37.801 06:34:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:37.801 06:34:51 -- common/autotest_common.sh@10 -- # set +x 00:04:37.801 06:34:51 -- json_config/json_config.sh@376 -- # killprocess 54255 00:04:37.801 06:34:51 -- common/autotest_common.sh@936 -- # '[' -z 54255 ']' 00:04:37.801 06:34:51 -- common/autotest_common.sh@940 -- # kill -0 54255 00:04:37.801 06:34:51 -- common/autotest_common.sh@941 -- # uname 00:04:37.801 06:34:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:37.801 06:34:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54255 00:04:37.801 killing process with pid 54255 00:04:37.801 06:34:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:37.801 06:34:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:37.801 06:34:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54255' 00:04:37.801 06:34:51 -- common/autotest_common.sh@955 -- # kill 54255 00:04:37.801 06:34:51 -- common/autotest_common.sh@960 -- # wait 54255 00:04:38.061 06:34:51 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:38.061 06:34:51 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:38.061 06:34:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:38.061 06:34:51 -- common/autotest_common.sh@10 -- # set +x 00:04:38.061 06:34:51 -- json_config/json_config.sh@381 -- # return 0 00:04:38.061 06:34:51 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:38.061 INFO: Success 00:04:38.061 ************************************ 00:04:38.061 END TEST json_config 00:04:38.061 ************************************ 00:04:38.061 00:04:38.061 real 0m8.046s 00:04:38.061 user 0m11.629s 00:04:38.061 sys 0m1.329s 00:04:38.061 06:34:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.061 06:34:51 -- common/autotest_common.sh@10 -- # set +x 00:04:38.061 06:34:51 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:38.061 
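Note: the killprocess call traced above does more than kill a pid: it first probes the pid with kill -0, then checks the command name reported by ps before sending the signal and reaping the process. A minimal sketch of the same guard; the function name and the way the sudo case is handled are illustrative rather than lifted from the helper:

# Sketch only: stop a previously launched target, but only if the pid is still
# alive and still looks like the process we started (spdk_tgt shows up as reactor_0).
stop_target() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")
    # the real helper special-cases sudo-wrapped targets; the sketch just bails out
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap it when it is our child
}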
06:34:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.061 06:34:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.061 06:34:51 -- common/autotest_common.sh@10 -- # set +x 00:04:38.061 ************************************ 00:04:38.061 START TEST json_config_extra_key 00:04:38.061 ************************************ 00:04:38.061 06:34:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:38.061 06:34:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:38.061 06:34:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:38.061 06:34:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:38.321 06:34:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:38.321 06:34:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:38.321 06:34:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:38.321 06:34:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:38.321 06:34:52 -- scripts/common.sh@335 -- # IFS=.-: 00:04:38.321 06:34:52 -- scripts/common.sh@335 -- # read -ra ver1 00:04:38.321 06:34:52 -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.321 06:34:52 -- scripts/common.sh@336 -- # read -ra ver2 00:04:38.321 06:34:52 -- scripts/common.sh@337 -- # local 'op=<' 00:04:38.321 06:34:52 -- scripts/common.sh@339 -- # ver1_l=2 00:04:38.321 06:34:52 -- scripts/common.sh@340 -- # ver2_l=1 00:04:38.321 06:34:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:38.321 06:34:52 -- scripts/common.sh@343 -- # case "$op" in 00:04:38.321 06:34:52 -- scripts/common.sh@344 -- # : 1 00:04:38.321 06:34:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:38.321 06:34:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.321 06:34:52 -- scripts/common.sh@364 -- # decimal 1 00:04:38.321 06:34:52 -- scripts/common.sh@352 -- # local d=1 00:04:38.321 06:34:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.321 06:34:52 -- scripts/common.sh@354 -- # echo 1 00:04:38.321 06:34:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:38.321 06:34:52 -- scripts/common.sh@365 -- # decimal 2 00:04:38.321 06:34:52 -- scripts/common.sh@352 -- # local d=2 00:04:38.321 06:34:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.321 06:34:52 -- scripts/common.sh@354 -- # echo 2 00:04:38.321 06:34:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:38.321 06:34:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:38.321 06:34:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:38.321 06:34:52 -- scripts/common.sh@367 -- # return 0 00:04:38.321 06:34:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.321 06:34:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:38.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.321 --rc genhtml_branch_coverage=1 00:04:38.321 --rc genhtml_function_coverage=1 00:04:38.321 --rc genhtml_legend=1 00:04:38.321 --rc geninfo_all_blocks=1 00:04:38.321 --rc geninfo_unexecuted_blocks=1 00:04:38.321 00:04:38.321 ' 00:04:38.321 06:34:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:38.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.321 --rc genhtml_branch_coverage=1 00:04:38.321 --rc genhtml_function_coverage=1 00:04:38.321 --rc genhtml_legend=1 00:04:38.321 --rc geninfo_all_blocks=1 00:04:38.321 --rc geninfo_unexecuted_blocks=1 00:04:38.321 00:04:38.321 ' 
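Note: the scripts/common.sh trace above (cmp_versions, decimal, the per-field loop) is a dotted-version comparison, used here to decide whether the installed lcov is older than 2.x so the matching coverage flags get exported. The trick is comparing release fields numerically from left to right, so 1.15 sorts after 1.9 even though it sorts before it as a string. A minimal standalone sketch of that idea, with an illustrative function name rather than the ones in scripts/common.sh, and assuming purely numeric fields:

# Sketch only: succeed when dotted version $1 is strictly lower than $2
# (so version_lt 1.9 1.15 succeeds, version_lt 1.15 1.9 fails).
version_lt() {
    local IFS=.-
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    local i
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x > y )) && return 1
        (( x < y )) && return 0
    done
    return 1        # equal versions are not "less than"
}

version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov older than 2.x'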
00:04:38.321 06:34:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:38.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.321 --rc genhtml_branch_coverage=1 00:04:38.321 --rc genhtml_function_coverage=1 00:04:38.321 --rc genhtml_legend=1 00:04:38.321 --rc geninfo_all_blocks=1 00:04:38.321 --rc geninfo_unexecuted_blocks=1 00:04:38.321 00:04:38.321 ' 00:04:38.321 06:34:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:38.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.321 --rc genhtml_branch_coverage=1 00:04:38.321 --rc genhtml_function_coverage=1 00:04:38.321 --rc genhtml_legend=1 00:04:38.321 --rc geninfo_all_blocks=1 00:04:38.321 --rc geninfo_unexecuted_blocks=1 00:04:38.321 00:04:38.321 ' 00:04:38.321 06:34:52 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:38.321 06:34:52 -- nvmf/common.sh@7 -- # uname -s 00:04:38.321 06:34:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.321 06:34:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.321 06:34:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.321 06:34:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.321 06:34:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.321 06:34:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.321 06:34:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.321 06:34:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.321 06:34:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.321 06:34:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.321 06:34:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:04:38.321 06:34:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:04:38.321 06:34:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.321 06:34:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.321 06:34:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:38.321 06:34:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:38.321 06:34:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.321 06:34:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.321 06:34:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.321 06:34:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.321 06:34:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.322 06:34:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.322 06:34:52 -- paths/export.sh@5 -- # export PATH 00:04:38.322 06:34:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.322 06:34:52 -- nvmf/common.sh@46 -- # : 0 00:04:38.322 06:34:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:38.322 06:34:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:38.322 06:34:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:38.322 06:34:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.322 06:34:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.322 06:34:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:38.322 06:34:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:38.322 06:34:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:38.322 INFO: launching applications... 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=54397 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:38.322 Waiting for target to run... 
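Note: with its parameter arrays filled in (app_params '-m 0x1 -s 1024', configs_path extra_key.json), the test now launches a fresh spdk_tgt that takes its whole configuration from the --json file and waits for the target's RPC socket to answer before doing anything else. A minimal sketch of that launch-and-wait step, assuming the workspace paths shown in this log; the probe via rpc_get_methods and the 100 x 0.1 s retry budget are choices made for the sketch, not the helper's actual waitforlisten logic:

# Sketch only: start spdk_tgt from a JSON config and poll its RPC socket until
# it accepts requests (or the retry budget runs out).
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk_tgt.sock
"$spdk/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
    --json "$spdk/test/json_config/extra_key.json" &
tgt_pid=$!
for (( i = 0; i < 100; i++ )); do
    if "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods > /dev/null 2>&1; then
        echo 'Target is up, RPC socket answering.'
        break
    fi
    sleep 0.1
done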
00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:38.322 06:34:52 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 54397 /var/tmp/spdk_tgt.sock 00:04:38.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:38.322 06:34:52 -- common/autotest_common.sh@829 -- # '[' -z 54397 ']' 00:04:38.322 06:34:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:38.322 06:34:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.322 06:34:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:38.322 06:34:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.322 06:34:52 -- common/autotest_common.sh@10 -- # set +x 00:04:38.322 [2024-12-14 06:34:52.191520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:38.322 [2024-12-14 06:34:52.191600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54397 ] 00:04:38.582 [2024-12-14 06:34:52.491771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.582 [2024-12-14 06:34:52.528624] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:38.582 [2024-12-14 06:34:52.528784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.520 00:04:39.520 INFO: shutting down applications... 00:04:39.520 06:34:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.520 06:34:53 -- common/autotest_common.sh@862 -- # return 0 00:04:39.520 06:34:53 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:39.520 06:34:53 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
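Note: the shutdown that follows mirrors the launch: the helper sends SIGINT so spdk_tgt can tear its subsystems down cleanly, then polls the pid with kill -0 rather than force-killing it. The 30 iterations of sleep 0.5 match what the trace below shows; reusing the tgt_pid captured in the launch sketch above, the loop looks roughly like this:

# Sketch only: ask the target to exit and give it up to ~15 s to disappear.
kill -SIGINT "$tgt_pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$tgt_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done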
00:04:39.520 06:34:53 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:39.520 06:34:53 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:39.520 06:34:53 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:39.520 06:34:53 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 54397 ]] 00:04:39.520 06:34:53 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 54397 00:04:39.520 06:34:53 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:39.520 06:34:53 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:39.520 06:34:53 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54397 00:04:39.520 06:34:53 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:39.780 06:34:53 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:39.780 06:34:53 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:39.780 06:34:53 -- json_config/json_config_extra_key.sh@50 -- # kill -0 54397 00:04:39.780 06:34:53 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:39.780 06:34:53 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:39.780 06:34:53 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:39.780 06:34:53 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:39.780 SPDK target shutdown done 00:04:39.780 06:34:53 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:39.780 Success 00:04:39.780 ************************************ 00:04:39.780 END TEST json_config_extra_key 00:04:39.780 ************************************ 00:04:39.780 00:04:39.780 real 0m1.765s 00:04:39.780 user 0m1.691s 00:04:39.780 sys 0m0.290s 00:04:39.780 06:34:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.780 06:34:53 -- common/autotest_common.sh@10 -- # set +x 00:04:39.780 06:34:53 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:39.780 06:34:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:39.780 06:34:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:39.780 06:34:53 -- common/autotest_common.sh@10 -- # set +x 00:04:40.040 ************************************ 00:04:40.040 START TEST alias_rpc 00:04:40.040 ************************************ 00:04:40.040 06:34:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.040 * Looking for test storage... 
00:04:40.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:40.040 06:34:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:40.040 06:34:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:40.040 06:34:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:40.040 06:34:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:40.040 06:34:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:40.041 06:34:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:40.041 06:34:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:40.041 06:34:53 -- scripts/common.sh@335 -- # IFS=.-: 00:04:40.041 06:34:53 -- scripts/common.sh@335 -- # read -ra ver1 00:04:40.041 06:34:53 -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.041 06:34:53 -- scripts/common.sh@336 -- # read -ra ver2 00:04:40.041 06:34:53 -- scripts/common.sh@337 -- # local 'op=<' 00:04:40.041 06:34:53 -- scripts/common.sh@339 -- # ver1_l=2 00:04:40.041 06:34:53 -- scripts/common.sh@340 -- # ver2_l=1 00:04:40.041 06:34:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:40.041 06:34:53 -- scripts/common.sh@343 -- # case "$op" in 00:04:40.041 06:34:53 -- scripts/common.sh@344 -- # : 1 00:04:40.041 06:34:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:40.041 06:34:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.041 06:34:53 -- scripts/common.sh@364 -- # decimal 1 00:04:40.041 06:34:53 -- scripts/common.sh@352 -- # local d=1 00:04:40.041 06:34:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.041 06:34:53 -- scripts/common.sh@354 -- # echo 1 00:04:40.041 06:34:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:40.041 06:34:53 -- scripts/common.sh@365 -- # decimal 2 00:04:40.041 06:34:53 -- scripts/common.sh@352 -- # local d=2 00:04:40.041 06:34:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.041 06:34:53 -- scripts/common.sh@354 -- # echo 2 00:04:40.041 06:34:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:40.041 06:34:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:40.041 06:34:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:40.041 06:34:53 -- scripts/common.sh@367 -- # return 0 00:04:40.041 06:34:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.041 06:34:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:40.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.041 --rc genhtml_branch_coverage=1 00:04:40.041 --rc genhtml_function_coverage=1 00:04:40.041 --rc genhtml_legend=1 00:04:40.041 --rc geninfo_all_blocks=1 00:04:40.041 --rc geninfo_unexecuted_blocks=1 00:04:40.041 00:04:40.041 ' 00:04:40.041 06:34:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:40.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.041 --rc genhtml_branch_coverage=1 00:04:40.041 --rc genhtml_function_coverage=1 00:04:40.041 --rc genhtml_legend=1 00:04:40.041 --rc geninfo_all_blocks=1 00:04:40.041 --rc geninfo_unexecuted_blocks=1 00:04:40.041 00:04:40.041 ' 00:04:40.041 06:34:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:40.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.041 --rc genhtml_branch_coverage=1 00:04:40.041 --rc genhtml_function_coverage=1 00:04:40.041 --rc genhtml_legend=1 00:04:40.041 --rc geninfo_all_blocks=1 00:04:40.041 --rc geninfo_unexecuted_blocks=1 00:04:40.041 00:04:40.041 ' 
00:04:40.041 06:34:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:40.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.041 --rc genhtml_branch_coverage=1 00:04:40.041 --rc genhtml_function_coverage=1 00:04:40.041 --rc genhtml_legend=1 00:04:40.041 --rc geninfo_all_blocks=1 00:04:40.041 --rc geninfo_unexecuted_blocks=1 00:04:40.041 00:04:40.041 ' 00:04:40.041 06:34:53 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:40.041 06:34:53 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=54474 00:04:40.041 06:34:53 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.041 06:34:53 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 54474 00:04:40.041 06:34:53 -- common/autotest_common.sh@829 -- # '[' -z 54474 ']' 00:04:40.041 06:34:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.041 06:34:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.041 06:34:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.041 06:34:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.041 06:34:53 -- common/autotest_common.sh@10 -- # set +x 00:04:40.041 [2024-12-14 06:34:54.011403] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:40.041 [2024-12-14 06:34:54.011732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54474 ] 00:04:40.301 [2024-12-14 06:34:54.151406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.301 [2024-12-14 06:34:54.205047] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:40.301 [2024-12-14 06:34:54.205456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.241 06:34:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.241 06:34:54 -- common/autotest_common.sh@862 -- # return 0 00:04:41.241 06:34:54 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:41.501 06:34:55 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 54474 00:04:41.501 06:34:55 -- common/autotest_common.sh@936 -- # '[' -z 54474 ']' 00:04:41.501 06:34:55 -- common/autotest_common.sh@940 -- # kill -0 54474 00:04:41.501 06:34:55 -- common/autotest_common.sh@941 -- # uname 00:04:41.501 06:34:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:41.501 06:34:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54474 00:04:41.501 killing process with pid 54474 00:04:41.501 06:34:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:41.501 06:34:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:41.501 06:34:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54474' 00:04:41.501 06:34:55 -- common/autotest_common.sh@955 -- # kill 54474 00:04:41.501 06:34:55 -- common/autotest_common.sh@960 -- # wait 54474 00:04:41.762 ************************************ 00:04:41.762 END TEST alias_rpc 00:04:41.762 ************************************ 00:04:41.762 00:04:41.762 real 0m1.775s 00:04:41.762 user 0m2.077s 00:04:41.762 sys 0m0.358s 
00:04:41.762 06:34:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.762 06:34:55 -- common/autotest_common.sh@10 -- # set +x 00:04:41.762 06:34:55 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:04:41.762 06:34:55 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:41.762 06:34:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.762 06:34:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.762 06:34:55 -- common/autotest_common.sh@10 -- # set +x 00:04:41.762 ************************************ 00:04:41.762 START TEST spdkcli_tcp 00:04:41.762 ************************************ 00:04:41.762 06:34:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:41.762 * Looking for test storage... 00:04:41.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:41.762 06:34:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:41.762 06:34:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:41.762 06:34:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:42.022 06:34:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:42.023 06:34:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:42.023 06:34:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:42.023 06:34:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:42.023 06:34:55 -- scripts/common.sh@335 -- # IFS=.-: 00:04:42.023 06:34:55 -- scripts/common.sh@335 -- # read -ra ver1 00:04:42.023 06:34:55 -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.023 06:34:55 -- scripts/common.sh@336 -- # read -ra ver2 00:04:42.023 06:34:55 -- scripts/common.sh@337 -- # local 'op=<' 00:04:42.023 06:34:55 -- scripts/common.sh@339 -- # ver1_l=2 00:04:42.023 06:34:55 -- scripts/common.sh@340 -- # ver2_l=1 00:04:42.023 06:34:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:42.023 06:34:55 -- scripts/common.sh@343 -- # case "$op" in 00:04:42.023 06:34:55 -- scripts/common.sh@344 -- # : 1 00:04:42.023 06:34:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:42.023 06:34:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.023 06:34:55 -- scripts/common.sh@364 -- # decimal 1 00:04:42.023 06:34:55 -- scripts/common.sh@352 -- # local d=1 00:04:42.023 06:34:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.023 06:34:55 -- scripts/common.sh@354 -- # echo 1 00:04:42.023 06:34:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:42.023 06:34:55 -- scripts/common.sh@365 -- # decimal 2 00:04:42.023 06:34:55 -- scripts/common.sh@352 -- # local d=2 00:04:42.023 06:34:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.023 06:34:55 -- scripts/common.sh@354 -- # echo 2 00:04:42.023 06:34:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:42.023 06:34:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:42.023 06:34:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:42.023 06:34:55 -- scripts/common.sh@367 -- # return 0 00:04:42.023 06:34:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.023 06:34:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:42.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.023 --rc genhtml_branch_coverage=1 00:04:42.023 --rc genhtml_function_coverage=1 00:04:42.023 --rc genhtml_legend=1 00:04:42.023 --rc geninfo_all_blocks=1 00:04:42.023 --rc geninfo_unexecuted_blocks=1 00:04:42.023 00:04:42.023 ' 00:04:42.023 06:34:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:42.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.023 --rc genhtml_branch_coverage=1 00:04:42.023 --rc genhtml_function_coverage=1 00:04:42.023 --rc genhtml_legend=1 00:04:42.023 --rc geninfo_all_blocks=1 00:04:42.023 --rc geninfo_unexecuted_blocks=1 00:04:42.023 00:04:42.023 ' 00:04:42.023 06:34:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:42.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.023 --rc genhtml_branch_coverage=1 00:04:42.023 --rc genhtml_function_coverage=1 00:04:42.023 --rc genhtml_legend=1 00:04:42.023 --rc geninfo_all_blocks=1 00:04:42.023 --rc geninfo_unexecuted_blocks=1 00:04:42.023 00:04:42.023 ' 00:04:42.023 06:34:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:42.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.023 --rc genhtml_branch_coverage=1 00:04:42.023 --rc genhtml_function_coverage=1 00:04:42.023 --rc genhtml_legend=1 00:04:42.023 --rc geninfo_all_blocks=1 00:04:42.023 --rc geninfo_unexecuted_blocks=1 00:04:42.023 00:04:42.023 ' 00:04:42.023 06:34:55 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:42.023 06:34:55 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:42.023 06:34:55 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:42.023 06:34:55 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:42.023 06:34:55 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:42.023 06:34:55 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:42.023 06:34:55 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:42.023 06:34:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.023 06:34:55 -- common/autotest_common.sh@10 -- # set +x 00:04:42.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
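Note: the spdkcli_tcp setup above only fixes the endpoints (the target keeps its usual Unix-domain RPC socket, the test picks 127.0.0.1:9998 for TCP). The interesting part follows in the trace: a socat process bridges that TCP port to /var/tmp/spdk.sock so rpc.py can reach the target over TCP, and rpc.py is told to retry until the bridge answers. A minimal sketch of the same bridge, assuming socat is installed and an spdk_tgt is already serving /var/tmp/spdk.sock:

# Sketch only: expose the target's Unix-domain RPC socket on a local TCP port
# and issue one RPC through the bridge.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
# -r 100 / -t 2: up to 100 connection retries with a 2 s timeout, as in the trace.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid" 2>/dev/null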
00:04:42.023 06:34:55 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=54557 00:04:42.023 06:34:55 -- spdkcli/tcp.sh@27 -- # waitforlisten 54557 00:04:42.023 06:34:55 -- common/autotest_common.sh@829 -- # '[' -z 54557 ']' 00:04:42.023 06:34:55 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:42.023 06:34:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.023 06:34:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.023 06:34:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.023 06:34:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.023 06:34:55 -- common/autotest_common.sh@10 -- # set +x 00:04:42.023 [2024-12-14 06:34:55.823938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:42.023 [2024-12-14 06:34:55.824038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54557 ] 00:04:42.023 [2024-12-14 06:34:55.951325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.023 [2024-12-14 06:34:56.003706] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:42.023 [2024-12-14 06:34:56.004177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.023 [2024-12-14 06:34:56.004189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.962 06:34:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.962 06:34:56 -- common/autotest_common.sh@862 -- # return 0 00:04:42.962 06:34:56 -- spdkcli/tcp.sh@31 -- # socat_pid=54574 00:04:42.962 06:34:56 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:42.962 06:34:56 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:43.223 [ 00:04:43.223 "bdev_malloc_delete", 00:04:43.223 "bdev_malloc_create", 00:04:43.223 "bdev_null_resize", 00:04:43.223 "bdev_null_delete", 00:04:43.223 "bdev_null_create", 00:04:43.223 "bdev_nvme_cuse_unregister", 00:04:43.223 "bdev_nvme_cuse_register", 00:04:43.223 "bdev_opal_new_user", 00:04:43.223 "bdev_opal_set_lock_state", 00:04:43.223 "bdev_opal_delete", 00:04:43.223 "bdev_opal_get_info", 00:04:43.223 "bdev_opal_create", 00:04:43.223 "bdev_nvme_opal_revert", 00:04:43.223 "bdev_nvme_opal_init", 00:04:43.223 "bdev_nvme_send_cmd", 00:04:43.223 "bdev_nvme_get_path_iostat", 00:04:43.223 "bdev_nvme_get_mdns_discovery_info", 00:04:43.223 "bdev_nvme_stop_mdns_discovery", 00:04:43.223 "bdev_nvme_start_mdns_discovery", 00:04:43.223 "bdev_nvme_set_multipath_policy", 00:04:43.223 "bdev_nvme_set_preferred_path", 00:04:43.223 "bdev_nvme_get_io_paths", 00:04:43.223 "bdev_nvme_remove_error_injection", 00:04:43.223 "bdev_nvme_add_error_injection", 00:04:43.223 "bdev_nvme_get_discovery_info", 00:04:43.223 "bdev_nvme_stop_discovery", 00:04:43.223 "bdev_nvme_start_discovery", 00:04:43.223 "bdev_nvme_get_controller_health_info", 00:04:43.223 "bdev_nvme_disable_controller", 00:04:43.223 "bdev_nvme_enable_controller", 00:04:43.223 "bdev_nvme_reset_controller", 00:04:43.223 "bdev_nvme_get_transport_statistics", 00:04:43.223 "bdev_nvme_apply_firmware", 00:04:43.223 "bdev_nvme_detach_controller", 00:04:43.223 
"bdev_nvme_get_controllers", 00:04:43.223 "bdev_nvme_attach_controller", 00:04:43.223 "bdev_nvme_set_hotplug", 00:04:43.223 "bdev_nvme_set_options", 00:04:43.223 "bdev_passthru_delete", 00:04:43.223 "bdev_passthru_create", 00:04:43.223 "bdev_lvol_grow_lvstore", 00:04:43.223 "bdev_lvol_get_lvols", 00:04:43.223 "bdev_lvol_get_lvstores", 00:04:43.223 "bdev_lvol_delete", 00:04:43.223 "bdev_lvol_set_read_only", 00:04:43.223 "bdev_lvol_resize", 00:04:43.223 "bdev_lvol_decouple_parent", 00:04:43.223 "bdev_lvol_inflate", 00:04:43.223 "bdev_lvol_rename", 00:04:43.223 "bdev_lvol_clone_bdev", 00:04:43.223 "bdev_lvol_clone", 00:04:43.223 "bdev_lvol_snapshot", 00:04:43.223 "bdev_lvol_create", 00:04:43.223 "bdev_lvol_delete_lvstore", 00:04:43.223 "bdev_lvol_rename_lvstore", 00:04:43.223 "bdev_lvol_create_lvstore", 00:04:43.223 "bdev_raid_set_options", 00:04:43.223 "bdev_raid_remove_base_bdev", 00:04:43.223 "bdev_raid_add_base_bdev", 00:04:43.223 "bdev_raid_delete", 00:04:43.223 "bdev_raid_create", 00:04:43.223 "bdev_raid_get_bdevs", 00:04:43.223 "bdev_error_inject_error", 00:04:43.223 "bdev_error_delete", 00:04:43.223 "bdev_error_create", 00:04:43.223 "bdev_split_delete", 00:04:43.223 "bdev_split_create", 00:04:43.223 "bdev_delay_delete", 00:04:43.223 "bdev_delay_create", 00:04:43.223 "bdev_delay_update_latency", 00:04:43.223 "bdev_zone_block_delete", 00:04:43.223 "bdev_zone_block_create", 00:04:43.223 "blobfs_create", 00:04:43.223 "blobfs_detect", 00:04:43.223 "blobfs_set_cache_size", 00:04:43.223 "bdev_aio_delete", 00:04:43.223 "bdev_aio_rescan", 00:04:43.223 "bdev_aio_create", 00:04:43.223 "bdev_ftl_set_property", 00:04:43.223 "bdev_ftl_get_properties", 00:04:43.223 "bdev_ftl_get_stats", 00:04:43.223 "bdev_ftl_unmap", 00:04:43.223 "bdev_ftl_unload", 00:04:43.223 "bdev_ftl_delete", 00:04:43.223 "bdev_ftl_load", 00:04:43.223 "bdev_ftl_create", 00:04:43.223 "bdev_virtio_attach_controller", 00:04:43.223 "bdev_virtio_scsi_get_devices", 00:04:43.223 "bdev_virtio_detach_controller", 00:04:43.223 "bdev_virtio_blk_set_hotplug", 00:04:43.223 "bdev_iscsi_delete", 00:04:43.223 "bdev_iscsi_create", 00:04:43.223 "bdev_iscsi_set_options", 00:04:43.223 "bdev_uring_delete", 00:04:43.223 "bdev_uring_create", 00:04:43.223 "accel_error_inject_error", 00:04:43.223 "ioat_scan_accel_module", 00:04:43.223 "dsa_scan_accel_module", 00:04:43.223 "iaa_scan_accel_module", 00:04:43.223 "vfu_virtio_create_scsi_endpoint", 00:04:43.223 "vfu_virtio_scsi_remove_target", 00:04:43.223 "vfu_virtio_scsi_add_target", 00:04:43.223 "vfu_virtio_create_blk_endpoint", 00:04:43.223 "vfu_virtio_delete_endpoint", 00:04:43.223 "iscsi_set_options", 00:04:43.223 "iscsi_get_auth_groups", 00:04:43.223 "iscsi_auth_group_remove_secret", 00:04:43.223 "iscsi_auth_group_add_secret", 00:04:43.223 "iscsi_delete_auth_group", 00:04:43.223 "iscsi_create_auth_group", 00:04:43.223 "iscsi_set_discovery_auth", 00:04:43.223 "iscsi_get_options", 00:04:43.223 "iscsi_target_node_request_logout", 00:04:43.223 "iscsi_target_node_set_redirect", 00:04:43.223 "iscsi_target_node_set_auth", 00:04:43.223 "iscsi_target_node_add_lun", 00:04:43.223 "iscsi_get_connections", 00:04:43.223 "iscsi_portal_group_set_auth", 00:04:43.223 "iscsi_start_portal_group", 00:04:43.223 "iscsi_delete_portal_group", 00:04:43.223 "iscsi_create_portal_group", 00:04:43.223 "iscsi_get_portal_groups", 00:04:43.223 "iscsi_delete_target_node", 00:04:43.223 "iscsi_target_node_remove_pg_ig_maps", 00:04:43.223 "iscsi_target_node_add_pg_ig_maps", 00:04:43.223 "iscsi_create_target_node", 00:04:43.223 
"iscsi_get_target_nodes", 00:04:43.223 "iscsi_delete_initiator_group", 00:04:43.223 "iscsi_initiator_group_remove_initiators", 00:04:43.223 "iscsi_initiator_group_add_initiators", 00:04:43.223 "iscsi_create_initiator_group", 00:04:43.223 "iscsi_get_initiator_groups", 00:04:43.223 "nvmf_set_crdt", 00:04:43.223 "nvmf_set_config", 00:04:43.223 "nvmf_set_max_subsystems", 00:04:43.223 "nvmf_subsystem_get_listeners", 00:04:43.223 "nvmf_subsystem_get_qpairs", 00:04:43.223 "nvmf_subsystem_get_controllers", 00:04:43.223 "nvmf_get_stats", 00:04:43.223 "nvmf_get_transports", 00:04:43.223 "nvmf_create_transport", 00:04:43.223 "nvmf_get_targets", 00:04:43.223 "nvmf_delete_target", 00:04:43.223 "nvmf_create_target", 00:04:43.223 "nvmf_subsystem_allow_any_host", 00:04:43.223 "nvmf_subsystem_remove_host", 00:04:43.223 "nvmf_subsystem_add_host", 00:04:43.223 "nvmf_subsystem_remove_ns", 00:04:43.223 "nvmf_subsystem_add_ns", 00:04:43.223 "nvmf_subsystem_listener_set_ana_state", 00:04:43.223 "nvmf_discovery_get_referrals", 00:04:43.223 "nvmf_discovery_remove_referral", 00:04:43.223 "nvmf_discovery_add_referral", 00:04:43.223 "nvmf_subsystem_remove_listener", 00:04:43.223 "nvmf_subsystem_add_listener", 00:04:43.223 "nvmf_delete_subsystem", 00:04:43.223 "nvmf_create_subsystem", 00:04:43.223 "nvmf_get_subsystems", 00:04:43.223 "env_dpdk_get_mem_stats", 00:04:43.223 "nbd_get_disks", 00:04:43.223 "nbd_stop_disk", 00:04:43.223 "nbd_start_disk", 00:04:43.223 "ublk_recover_disk", 00:04:43.223 "ublk_get_disks", 00:04:43.223 "ublk_stop_disk", 00:04:43.223 "ublk_start_disk", 00:04:43.223 "ublk_destroy_target", 00:04:43.223 "ublk_create_target", 00:04:43.223 "virtio_blk_create_transport", 00:04:43.223 "virtio_blk_get_transports", 00:04:43.223 "vhost_controller_set_coalescing", 00:04:43.223 "vhost_get_controllers", 00:04:43.223 "vhost_delete_controller", 00:04:43.223 "vhost_create_blk_controller", 00:04:43.223 "vhost_scsi_controller_remove_target", 00:04:43.223 "vhost_scsi_controller_add_target", 00:04:43.223 "vhost_start_scsi_controller", 00:04:43.223 "vhost_create_scsi_controller", 00:04:43.224 "thread_set_cpumask", 00:04:43.224 "framework_get_scheduler", 00:04:43.224 "framework_set_scheduler", 00:04:43.224 "framework_get_reactors", 00:04:43.224 "thread_get_io_channels", 00:04:43.224 "thread_get_pollers", 00:04:43.224 "thread_get_stats", 00:04:43.224 "framework_monitor_context_switch", 00:04:43.224 "spdk_kill_instance", 00:04:43.224 "log_enable_timestamps", 00:04:43.224 "log_get_flags", 00:04:43.224 "log_clear_flag", 00:04:43.224 "log_set_flag", 00:04:43.224 "log_get_level", 00:04:43.224 "log_set_level", 00:04:43.224 "log_get_print_level", 00:04:43.224 "log_set_print_level", 00:04:43.224 "framework_enable_cpumask_locks", 00:04:43.224 "framework_disable_cpumask_locks", 00:04:43.224 "framework_wait_init", 00:04:43.224 "framework_start_init", 00:04:43.224 "scsi_get_devices", 00:04:43.224 "bdev_get_histogram", 00:04:43.224 "bdev_enable_histogram", 00:04:43.224 "bdev_set_qos_limit", 00:04:43.224 "bdev_set_qd_sampling_period", 00:04:43.224 "bdev_get_bdevs", 00:04:43.224 "bdev_reset_iostat", 00:04:43.224 "bdev_get_iostat", 00:04:43.224 "bdev_examine", 00:04:43.224 "bdev_wait_for_examine", 00:04:43.224 "bdev_set_options", 00:04:43.224 "notify_get_notifications", 00:04:43.224 "notify_get_types", 00:04:43.224 "accel_get_stats", 00:04:43.224 "accel_set_options", 00:04:43.224 "accel_set_driver", 00:04:43.224 "accel_crypto_key_destroy", 00:04:43.224 "accel_crypto_keys_get", 00:04:43.224 "accel_crypto_key_create", 00:04:43.224 
"accel_assign_opc", 00:04:43.224 "accel_get_module_info", 00:04:43.224 "accel_get_opc_assignments", 00:04:43.224 "vmd_rescan", 00:04:43.224 "vmd_remove_device", 00:04:43.224 "vmd_enable", 00:04:43.224 "sock_set_default_impl", 00:04:43.224 "sock_impl_set_options", 00:04:43.224 "sock_impl_get_options", 00:04:43.224 "iobuf_get_stats", 00:04:43.224 "iobuf_set_options", 00:04:43.224 "framework_get_pci_devices", 00:04:43.224 "framework_get_config", 00:04:43.224 "framework_get_subsystems", 00:04:43.224 "vfu_tgt_set_base_path", 00:04:43.224 "trace_get_info", 00:04:43.224 "trace_get_tpoint_group_mask", 00:04:43.224 "trace_disable_tpoint_group", 00:04:43.224 "trace_enable_tpoint_group", 00:04:43.224 "trace_clear_tpoint_mask", 00:04:43.224 "trace_set_tpoint_mask", 00:04:43.224 "spdk_get_version", 00:04:43.224 "rpc_get_methods" 00:04:43.224 ] 00:04:43.224 06:34:56 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:43.224 06:34:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:43.224 06:34:56 -- common/autotest_common.sh@10 -- # set +x 00:04:43.224 06:34:57 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:43.224 06:34:57 -- spdkcli/tcp.sh@38 -- # killprocess 54557 00:04:43.224 06:34:57 -- common/autotest_common.sh@936 -- # '[' -z 54557 ']' 00:04:43.224 06:34:57 -- common/autotest_common.sh@940 -- # kill -0 54557 00:04:43.224 06:34:57 -- common/autotest_common.sh@941 -- # uname 00:04:43.224 06:34:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:43.224 06:34:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54557 00:04:43.224 killing process with pid 54557 00:04:43.224 06:34:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:43.224 06:34:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:43.224 06:34:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54557' 00:04:43.224 06:34:57 -- common/autotest_common.sh@955 -- # kill 54557 00:04:43.224 06:34:57 -- common/autotest_common.sh@960 -- # wait 54557 00:04:43.484 ************************************ 00:04:43.484 END TEST spdkcli_tcp 00:04:43.484 ************************************ 00:04:43.484 00:04:43.484 real 0m1.718s 00:04:43.484 user 0m3.213s 00:04:43.484 sys 0m0.356s 00:04:43.484 06:34:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:43.484 06:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.484 06:34:57 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:43.484 06:34:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:43.484 06:34:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:43.484 06:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.484 ************************************ 00:04:43.484 START TEST dpdk_mem_utility 00:04:43.484 ************************************ 00:04:43.484 06:34:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:43.484 * Looking for test storage... 
00:04:43.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:43.484 06:34:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:43.484 06:34:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:43.484 06:34:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:43.744 06:34:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:43.744 06:34:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:43.744 06:34:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:43.744 06:34:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:43.744 06:34:57 -- scripts/common.sh@335 -- # IFS=.-: 00:04:43.744 06:34:57 -- scripts/common.sh@335 -- # read -ra ver1 00:04:43.744 06:34:57 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.744 06:34:57 -- scripts/common.sh@336 -- # read -ra ver2 00:04:43.744 06:34:57 -- scripts/common.sh@337 -- # local 'op=<' 00:04:43.744 06:34:57 -- scripts/common.sh@339 -- # ver1_l=2 00:04:43.744 06:34:57 -- scripts/common.sh@340 -- # ver2_l=1 00:04:43.744 06:34:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:43.744 06:34:57 -- scripts/common.sh@343 -- # case "$op" in 00:04:43.744 06:34:57 -- scripts/common.sh@344 -- # : 1 00:04:43.744 06:34:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:43.744 06:34:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.744 06:34:57 -- scripts/common.sh@364 -- # decimal 1 00:04:43.744 06:34:57 -- scripts/common.sh@352 -- # local d=1 00:04:43.744 06:34:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.744 06:34:57 -- scripts/common.sh@354 -- # echo 1 00:04:43.744 06:34:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:43.744 06:34:57 -- scripts/common.sh@365 -- # decimal 2 00:04:43.744 06:34:57 -- scripts/common.sh@352 -- # local d=2 00:04:43.744 06:34:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.744 06:34:57 -- scripts/common.sh@354 -- # echo 2 00:04:43.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:43.744 06:34:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:43.744 06:34:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:43.744 06:34:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:43.744 06:34:57 -- scripts/common.sh@367 -- # return 0 00:04:43.744 06:34:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.744 06:34:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:43.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.744 --rc genhtml_branch_coverage=1 00:04:43.744 --rc genhtml_function_coverage=1 00:04:43.744 --rc genhtml_legend=1 00:04:43.744 --rc geninfo_all_blocks=1 00:04:43.744 --rc geninfo_unexecuted_blocks=1 00:04:43.744 00:04:43.744 ' 00:04:43.744 06:34:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:43.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.744 --rc genhtml_branch_coverage=1 00:04:43.744 --rc genhtml_function_coverage=1 00:04:43.744 --rc genhtml_legend=1 00:04:43.744 --rc geninfo_all_blocks=1 00:04:43.744 --rc geninfo_unexecuted_blocks=1 00:04:43.744 00:04:43.744 ' 00:04:43.744 06:34:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:43.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.744 --rc genhtml_branch_coverage=1 00:04:43.744 --rc genhtml_function_coverage=1 00:04:43.744 --rc genhtml_legend=1 00:04:43.744 --rc geninfo_all_blocks=1 00:04:43.744 --rc geninfo_unexecuted_blocks=1 00:04:43.744 00:04:43.744 ' 00:04:43.744 06:34:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:43.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.744 --rc genhtml_branch_coverage=1 00:04:43.744 --rc genhtml_function_coverage=1 00:04:43.744 --rc genhtml_legend=1 00:04:43.744 --rc geninfo_all_blocks=1 00:04:43.744 --rc geninfo_unexecuted_blocks=1 00:04:43.744 00:04:43.744 ' 00:04:43.744 06:34:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:43.744 06:34:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=54649 00:04:43.744 06:34:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.744 06:34:57 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 54649 00:04:43.744 06:34:57 -- common/autotest_common.sh@829 -- # '[' -z 54649 ']' 00:04:43.744 06:34:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.744 06:34:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.744 06:34:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.744 06:34:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.744 06:34:57 -- common/autotest_common.sh@10 -- # set +x 00:04:43.744 [2024-12-14 06:34:57.585245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:43.744 [2024-12-14 06:34:57.585583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54649 ] 00:04:43.744 [2024-12-14 06:34:57.715780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.015 [2024-12-14 06:34:57.767883] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:44.015 [2024-12-14 06:34:57.768357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.638 06:34:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.638 06:34:58 -- common/autotest_common.sh@862 -- # return 0 00:04:44.638 06:34:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:44.638 06:34:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:44.638 06:34:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.638 06:34:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.638 { 00:04:44.638 "filename": "/tmp/spdk_mem_dump.txt" 00:04:44.638 } 00:04:44.638 06:34:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.638 06:34:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:44.638 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:44.638 1 heaps totaling size 814.000000 MiB 00:04:44.638 size: 814.000000 MiB heap id: 0 00:04:44.638 end heaps---------- 00:04:44.638 8 mempools totaling size 598.116089 MiB 00:04:44.638 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:44.638 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:44.638 size: 84.521057 MiB name: bdev_io_54649 00:04:44.638 size: 51.011292 MiB name: evtpool_54649 00:04:44.638 size: 50.003479 MiB name: msgpool_54649 00:04:44.638 size: 21.763794 MiB name: PDU_Pool 00:04:44.638 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:44.638 size: 0.026123 MiB name: Session_Pool 00:04:44.638 end mempools------- 00:04:44.638 6 memzones totaling size 4.142822 MiB 00:04:44.638 size: 1.000366 MiB name: RG_ring_0_54649 00:04:44.638 size: 1.000366 MiB name: RG_ring_1_54649 00:04:44.638 size: 1.000366 MiB name: RG_ring_4_54649 00:04:44.638 size: 1.000366 MiB name: RG_ring_5_54649 00:04:44.638 size: 0.125366 MiB name: RG_ring_2_54649 00:04:44.638 size: 0.015991 MiB name: RG_ring_3_54649 00:04:44.638 end memzones------- 00:04:44.638 06:34:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:44.638 heap id: 0 total size: 814.000000 MiB number of busy elements: 301 number of free elements: 15 00:04:44.638 list of free elements. 
size: 12.471741 MiB 00:04:44.638 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:44.638 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:44.638 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:44.638 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:44.638 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:44.638 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:44.638 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:44.638 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:44.638 element at address: 0x200000200000 with size: 0.832825 MiB 00:04:44.638 element at address: 0x20001aa00000 with size: 0.569336 MiB 00:04:44.638 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:44.638 element at address: 0x200000800000 with size: 0.486328 MiB 00:04:44.638 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:44.638 element at address: 0x200027e00000 with size: 0.395752 MiB 00:04:44.638 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:44.638 list of standard malloc elements. size: 199.265686 MiB 00:04:44.638 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:44.638 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:44.638 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:44.638 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:44.638 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:44.638 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:44.638 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:44.638 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:44.638 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:44.638 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:04:44.638 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:44.638 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:44.639 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:44.639 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:44.639 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:44.639 element at 
address: 0x200003a59600 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:44.639 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000b27d700 
with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:44.639 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:44.639 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa93880 with size: 0.000183 MiB 
00:04:44.639 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:44.639 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:44.640 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e65500 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:44.640 element at 
address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6ef40 
with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:44.640 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:44.640 list of memzone associated elements. size: 602.262573 MiB 00:04:44.640 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:44.640 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:44.640 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:44.640 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:44.640 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:44.640 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_54649_0 00:04:44.640 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:44.640 associated memzone info: size: 48.002930 MiB name: MP_evtpool_54649_0 00:04:44.640 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:44.640 associated memzone info: size: 48.002930 MiB name: MP_msgpool_54649_0 00:04:44.640 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:44.640 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:44.640 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:44.640 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:44.640 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:44.640 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_54649 00:04:44.641 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:44.641 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_54649 00:04:44.641 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:44.641 associated memzone info: size: 1.007996 MiB name: MP_evtpool_54649 00:04:44.641 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:44.641 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:44.641 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:44.641 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:44.641 element at address: 0x2000070fde40 with size: 1.008118 
MiB 00:04:44.641 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:44.641 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:44.641 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:44.641 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:44.641 associated memzone info: size: 1.000366 MiB name: RG_ring_0_54649 00:04:44.641 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:44.641 associated memzone info: size: 1.000366 MiB name: RG_ring_1_54649 00:04:44.641 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:44.641 associated memzone info: size: 1.000366 MiB name: RG_ring_4_54649 00:04:44.641 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:44.641 associated memzone info: size: 1.000366 MiB name: RG_ring_5_54649 00:04:44.641 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:44.641 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_54649 00:04:44.641 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:44.641 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:44.641 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:44.641 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:44.641 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:44.641 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:44.641 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:44.641 associated memzone info: size: 0.125366 MiB name: RG_ring_2_54649 00:04:44.641 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:44.641 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:44.641 element at address: 0x200027e65680 with size: 0.023743 MiB 00:04:44.641 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:44.641 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:44.641 associated memzone info: size: 0.015991 MiB name: RG_ring_3_54649 00:04:44.641 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:04:44.641 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:44.641 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:44.641 associated memzone info: size: 0.000183 MiB name: MP_msgpool_54649 00:04:44.641 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:44.641 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_54649 00:04:44.641 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:04:44.641 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:44.641 06:34:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:44.641 06:34:58 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 54649 00:04:44.641 06:34:58 -- common/autotest_common.sh@936 -- # '[' -z 54649 ']' 00:04:44.641 06:34:58 -- common/autotest_common.sh@940 -- # kill -0 54649 00:04:44.641 06:34:58 -- common/autotest_common.sh@941 -- # uname 00:04:44.641 06:34:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:44.641 06:34:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54649 00:04:44.901 killing process with pid 54649 00:04:44.901 06:34:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:44.901 06:34:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:44.901 06:34:58 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 54649' 00:04:44.901 06:34:58 -- common/autotest_common.sh@955 -- # kill 54649 00:04:44.901 06:34:58 -- common/autotest_common.sh@960 -- # wait 54649 00:04:45.160 00:04:45.161 real 0m1.545s 00:04:45.161 user 0m1.709s 00:04:45.161 sys 0m0.316s 00:04:45.161 06:34:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:45.161 ************************************ 00:04:45.161 END TEST dpdk_mem_utility 00:04:45.161 ************************************ 00:04:45.161 06:34:58 -- common/autotest_common.sh@10 -- # set +x 00:04:45.161 06:34:58 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:45.161 06:34:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.161 06:34:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.161 06:34:58 -- common/autotest_common.sh@10 -- # set +x 00:04:45.161 ************************************ 00:04:45.161 START TEST event 00:04:45.161 ************************************ 00:04:45.161 06:34:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:45.161 * Looking for test storage... 00:04:45.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:45.161 06:34:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:45.161 06:34:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:45.161 06:34:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:45.161 06:34:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:45.161 06:34:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:45.161 06:34:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:45.161 06:34:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:45.161 06:34:59 -- scripts/common.sh@335 -- # IFS=.-: 00:04:45.161 06:34:59 -- scripts/common.sh@335 -- # read -ra ver1 00:04:45.161 06:34:59 -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.161 06:34:59 -- scripts/common.sh@336 -- # read -ra ver2 00:04:45.161 06:34:59 -- scripts/common.sh@337 -- # local 'op=<' 00:04:45.161 06:34:59 -- scripts/common.sh@339 -- # ver1_l=2 00:04:45.161 06:34:59 -- scripts/common.sh@340 -- # ver2_l=1 00:04:45.161 06:34:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:45.161 06:34:59 -- scripts/common.sh@343 -- # case "$op" in 00:04:45.161 06:34:59 -- scripts/common.sh@344 -- # : 1 00:04:45.161 06:34:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:45.161 06:34:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.161 06:34:59 -- scripts/common.sh@364 -- # decimal 1 00:04:45.161 06:34:59 -- scripts/common.sh@352 -- # local d=1 00:04:45.161 06:34:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.161 06:34:59 -- scripts/common.sh@354 -- # echo 1 00:04:45.161 06:34:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:45.161 06:34:59 -- scripts/common.sh@365 -- # decimal 2 00:04:45.161 06:34:59 -- scripts/common.sh@352 -- # local d=2 00:04:45.161 06:34:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.161 06:34:59 -- scripts/common.sh@354 -- # echo 2 00:04:45.161 06:34:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:45.161 06:34:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:45.161 06:34:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:45.161 06:34:59 -- scripts/common.sh@367 -- # return 0 00:04:45.161 06:34:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.161 06:34:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:45.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.161 --rc genhtml_branch_coverage=1 00:04:45.161 --rc genhtml_function_coverage=1 00:04:45.161 --rc genhtml_legend=1 00:04:45.161 --rc geninfo_all_blocks=1 00:04:45.161 --rc geninfo_unexecuted_blocks=1 00:04:45.161 00:04:45.161 ' 00:04:45.161 06:34:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:45.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.161 --rc genhtml_branch_coverage=1 00:04:45.161 --rc genhtml_function_coverage=1 00:04:45.161 --rc genhtml_legend=1 00:04:45.161 --rc geninfo_all_blocks=1 00:04:45.161 --rc geninfo_unexecuted_blocks=1 00:04:45.161 00:04:45.161 ' 00:04:45.161 06:34:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:45.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.161 --rc genhtml_branch_coverage=1 00:04:45.161 --rc genhtml_function_coverage=1 00:04:45.161 --rc genhtml_legend=1 00:04:45.161 --rc geninfo_all_blocks=1 00:04:45.161 --rc geninfo_unexecuted_blocks=1 00:04:45.161 00:04:45.161 ' 00:04:45.161 06:34:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:45.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.161 --rc genhtml_branch_coverage=1 00:04:45.161 --rc genhtml_function_coverage=1 00:04:45.161 --rc genhtml_legend=1 00:04:45.161 --rc geninfo_all_blocks=1 00:04:45.161 --rc geninfo_unexecuted_blocks=1 00:04:45.161 00:04:45.161 ' 00:04:45.161 06:34:59 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:45.161 06:34:59 -- bdev/nbd_common.sh@6 -- # set -e 00:04:45.161 06:34:59 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:45.161 06:34:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:45.161 06:34:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.161 06:34:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.161 ************************************ 00:04:45.161 START TEST event_perf 00:04:45.161 ************************************ 00:04:45.161 06:34:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:45.421 Running I/O for 1 seconds...[2024-12-14 06:34:59.154060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
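Note: the xtrace above is scripts/common.sh comparing the detected lcov version against 1.15 (cmp_versions / lt) before choosing the extra --rc lcov_* coverage options. A minimal sketch of that dotted-version comparison, simplified from the traced helper (the real one also splits on '-' and ':' and validates each field as a number):

lt() {   # return 0 (true) when version $1 is strictly older than $2
  local -a ver1 ver2
  IFS=. read -ra ver1 <<< "$1"
  IFS=. read -ra ver2 <<< "$2"
  local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
    (( a > b )) && return 1
    (( a < b )) && return 0
  done
  return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo 'lcov 1.15 is older than 2'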
00:04:45.421 [2024-12-14 06:34:59.154329] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54728 ] 00:04:45.421 [2024-12-14 06:34:59.285869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.421 [2024-12-14 06:34:59.335785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.421 [2024-12-14 06:34:59.335935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.421 [2024-12-14 06:34:59.336041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.421 [2024-12-14 06:34:59.336047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.809 Running I/O for 1 seconds... 00:04:46.809 lcore 0: 204599 00:04:46.809 lcore 1: 204596 00:04:46.809 lcore 2: 204597 00:04:46.809 lcore 3: 204598 00:04:46.809 done. 00:04:46.809 00:04:46.809 real 0m1.295s 00:04:46.809 user 0m4.123s 00:04:46.809 sys 0m0.050s 00:04:46.809 06:35:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.809 ************************************ 00:04:46.809 END TEST event_perf 00:04:46.809 ************************************ 00:04:46.809 06:35:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.809 06:35:00 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:46.809 06:35:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:46.809 06:35:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.809 06:35:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.809 ************************************ 00:04:46.809 START TEST event_reactor 00:04:46.809 ************************************ 00:04:46.809 06:35:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:46.809 [2024-12-14 06:35:00.493725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
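For context, the -m 0xF mask passed to event_perf above selects lcores 0-3, which is why four per-lcore event counters are printed. An illustrative snippet (not part of the test scripts) for expanding such a hex core mask:

mask=0xF
for (( i = 0; i < 64; i++ )); do
  (( (mask >> i) & 1 )) && echo "lcore $i selected"   # prints lcore 0..3 for 0xF
done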
00:04:46.809 [2024-12-14 06:35:00.494007] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54761 ] 00:04:46.809 [2024-12-14 06:35:00.623548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.809 [2024-12-14 06:35:00.676593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.191 test_start 00:04:48.191 oneshot 00:04:48.191 tick 100 00:04:48.191 tick 100 00:04:48.191 tick 250 00:04:48.191 tick 100 00:04:48.191 tick 100 00:04:48.191 tick 100 00:04:48.191 tick 250 00:04:48.191 tick 500 00:04:48.191 tick 100 00:04:48.191 tick 100 00:04:48.191 tick 250 00:04:48.191 tick 100 00:04:48.191 tick 100 00:04:48.191 test_end 00:04:48.191 ************************************ 00:04:48.191 END TEST event_reactor 00:04:48.191 ************************************ 00:04:48.191 00:04:48.191 real 0m1.283s 00:04:48.191 user 0m1.139s 00:04:48.191 sys 0m0.038s 00:04:48.191 06:35:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:48.191 06:35:01 -- common/autotest_common.sh@10 -- # set +x 00:04:48.191 06:35:01 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:48.191 06:35:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:48.191 06:35:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.191 06:35:01 -- common/autotest_common.sh@10 -- # set +x 00:04:48.191 ************************************ 00:04:48.191 START TEST event_reactor_perf 00:04:48.191 ************************************ 00:04:48.191 06:35:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:48.191 [2024-12-14 06:35:01.828259] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
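Each sub-test in this log is driven by the harness's run_test helper, which is what emits the START TEST / END TEST banners and the real/user/sys timings shown here. A stripped-down sketch of that pattern (the actual helper in autotest_common.sh additionally handles xtrace and argument checks):

run_test() {   # simplified: banner, time the command, banner
  local name=$1; shift
  echo "START TEST $name"
  time "$@"
  local rc=$?
  echo "END TEST $name"
  return $rc
}
run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1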
00:04:48.191 [2024-12-14 06:35:01.828346] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54802 ] 00:04:48.191 [2024-12-14 06:35:01.964918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.191 [2024-12-14 06:35:02.014513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.129 test_start 00:04:49.129 test_end 00:04:49.129 Performance: 437998 events per second 00:04:49.129 00:04:49.129 real 0m1.285s 00:04:49.129 user 0m1.137s 00:04:49.129 sys 0m0.044s 00:04:49.129 06:35:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.129 ************************************ 00:04:49.129 END TEST event_reactor_perf 00:04:49.129 ************************************ 00:04:49.129 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.389 06:35:03 -- event/event.sh@49 -- # uname -s 00:04:49.389 06:35:03 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:49.389 06:35:03 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:49.389 06:35:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.389 06:35:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.389 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.389 ************************************ 00:04:49.389 START TEST event_scheduler 00:04:49.389 ************************************ 00:04:49.389 06:35:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:49.389 * Looking for test storage... 00:04:49.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:49.389 06:35:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:49.389 06:35:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:49.389 06:35:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:49.389 06:35:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:49.389 06:35:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:49.389 06:35:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:49.389 06:35:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:49.389 06:35:03 -- scripts/common.sh@335 -- # IFS=.-: 00:04:49.389 06:35:03 -- scripts/common.sh@335 -- # read -ra ver1 00:04:49.389 06:35:03 -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.389 06:35:03 -- scripts/common.sh@336 -- # read -ra ver2 00:04:49.389 06:35:03 -- scripts/common.sh@337 -- # local 'op=<' 00:04:49.389 06:35:03 -- scripts/common.sh@339 -- # ver1_l=2 00:04:49.389 06:35:03 -- scripts/common.sh@340 -- # ver2_l=1 00:04:49.389 06:35:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:49.389 06:35:03 -- scripts/common.sh@343 -- # case "$op" in 00:04:49.389 06:35:03 -- scripts/common.sh@344 -- # : 1 00:04:49.389 06:35:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:49.389 06:35:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.389 06:35:03 -- scripts/common.sh@364 -- # decimal 1 00:04:49.389 06:35:03 -- scripts/common.sh@352 -- # local d=1 00:04:49.389 06:35:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.389 06:35:03 -- scripts/common.sh@354 -- # echo 1 00:04:49.389 06:35:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:49.389 06:35:03 -- scripts/common.sh@365 -- # decimal 2 00:04:49.389 06:35:03 -- scripts/common.sh@352 -- # local d=2 00:04:49.389 06:35:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.389 06:35:03 -- scripts/common.sh@354 -- # echo 2 00:04:49.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.389 06:35:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:49.389 06:35:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:49.389 06:35:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:49.389 06:35:03 -- scripts/common.sh@367 -- # return 0 00:04:49.389 06:35:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.389 06:35:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:49.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.389 --rc genhtml_branch_coverage=1 00:04:49.389 --rc genhtml_function_coverage=1 00:04:49.389 --rc genhtml_legend=1 00:04:49.389 --rc geninfo_all_blocks=1 00:04:49.389 --rc geninfo_unexecuted_blocks=1 00:04:49.389 00:04:49.389 ' 00:04:49.389 06:35:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:49.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.389 --rc genhtml_branch_coverage=1 00:04:49.389 --rc genhtml_function_coverage=1 00:04:49.389 --rc genhtml_legend=1 00:04:49.389 --rc geninfo_all_blocks=1 00:04:49.389 --rc geninfo_unexecuted_blocks=1 00:04:49.389 00:04:49.389 ' 00:04:49.389 06:35:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:49.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.389 --rc genhtml_branch_coverage=1 00:04:49.389 --rc genhtml_function_coverage=1 00:04:49.389 --rc genhtml_legend=1 00:04:49.389 --rc geninfo_all_blocks=1 00:04:49.390 --rc geninfo_unexecuted_blocks=1 00:04:49.390 00:04:49.390 ' 00:04:49.390 06:35:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:49.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.390 --rc genhtml_branch_coverage=1 00:04:49.390 --rc genhtml_function_coverage=1 00:04:49.390 --rc genhtml_legend=1 00:04:49.390 --rc geninfo_all_blocks=1 00:04:49.390 --rc geninfo_unexecuted_blocks=1 00:04:49.390 00:04:49.390 ' 00:04:49.390 06:35:03 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:49.390 06:35:03 -- scheduler/scheduler.sh@35 -- # scheduler_pid=54865 00:04:49.390 06:35:03 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.390 06:35:03 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:49.390 06:35:03 -- scheduler/scheduler.sh@37 -- # waitforlisten 54865 00:04:49.390 06:35:03 -- common/autotest_common.sh@829 -- # '[' -z 54865 ']' 00:04:49.390 06:35:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.390 06:35:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.390 06:35:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
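The scheduler app above is launched with --wait-for-rpc, so the harness then sits in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers (max_retries=100 in the trace). A hedged sketch of that polling idea, using the generic rpc_get_methods call rather than the helper's exact probe:

sock=/var/tmp/spdk.sock
for (( i = 0; i < 100; i++ )); do
  # stop waiting as soon as the SPDK target responds on its RPC socket
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
    echo "target is listening on $sock"
    break
  fi
  sleep 0.5
done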
00:04:49.390 06:35:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.390 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.390 [2024-12-14 06:35:03.373135] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:49.390 [2024-12-14 06:35:03.373369] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54865 ] 00:04:49.650 [2024-12-14 06:35:03.509479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:49.650 [2024-12-14 06:35:03.579597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.650 [2024-12-14 06:35:03.579694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.650 [2024-12-14 06:35:03.579846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.650 [2024-12-14 06:35:03.579852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.650 06:35:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.650 06:35:03 -- common/autotest_common.sh@862 -- # return 0 00:04:49.650 06:35:03 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:49.650 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.650 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.650 POWER: Env isn't set yet! 00:04:49.650 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:49.650 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:49.650 POWER: Cannot set governor of lcore 0 to userspace 00:04:49.650 POWER: Attempting to initialise PSTAT power management... 00:04:49.650 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:49.650 POWER: Cannot set governor of lcore 0 to performance 00:04:49.650 POWER: Attempting to initialise AMD PSTATE power management... 00:04:49.650 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:49.650 POWER: Cannot set governor of lcore 0 to userspace 00:04:49.650 POWER: Attempting to initialise CPPC power management... 00:04:49.650 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:49.650 POWER: Cannot set governor of lcore 0 to userspace 00:04:49.650 POWER: Attempting to initialise VM power management... 
00:04:49.650 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:49.650 POWER: Unable to set Power Management Environment for lcore 0 00:04:49.650 [2024-12-14 06:35:03.628592] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:49.650 [2024-12-14 06:35:03.628608] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:49.650 [2024-12-14 06:35:03.628618] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:49.650 [2024-12-14 06:35:03.628633] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:49.650 [2024-12-14 06:35:03.628643] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:49.650 [2024-12-14 06:35:03.628651] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:49.650 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.650 06:35:03 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:49.650 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.650 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 [2024-12-14 06:35:03.685965] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:49.910 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:49.910 06:35:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.910 06:35:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 ************************************ 00:04:49.910 START TEST scheduler_create_thread 00:04:49.910 ************************************ 00:04:49.910 06:35:03 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:49.910 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 2 00:04:49.910 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:49.910 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 3 00:04:49.910 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:49.910 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 4 00:04:49.910 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:49.910 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 5 00:04:49.910 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:49.910 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 6 00:04:49.910 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:49.910 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 7 00:04:49.910 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:49.910 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 8 00:04:49.910 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:49.910 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 9 00:04:49.910 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:49.910 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 10 00:04:49.910 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:49.910 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:49.910 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:49.910 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 06:35:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:49.910 06:35:03 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:49.910 06:35:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.910 06:35:03 -- common/autotest_common.sh@10 -- # set +x 00:04:51.293 ************************************ 00:04:51.293 END TEST scheduler_create_thread 00:04:51.293 ************************************ 00:04:51.293 06:35:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.293 00:04:51.293 real 0m1.171s 00:04:51.293 user 0m0.020s 00:04:51.293 sys 0m0.005s 00:04:51.293 06:35:04 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.294 06:35:04 -- common/autotest_common.sh@10 -- # set +x 00:04:51.294 06:35:04 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:51.294 06:35:04 -- scheduler/scheduler.sh@46 -- # killprocess 54865 00:04:51.294 06:35:04 -- common/autotest_common.sh@936 -- # '[' -z 54865 ']' 00:04:51.294 06:35:04 -- common/autotest_common.sh@940 -- # kill -0 54865 00:04:51.294 06:35:04 -- common/autotest_common.sh@941 -- # uname 00:04:51.294 06:35:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:51.294 06:35:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54865 00:04:51.294 killing process with pid 54865 00:04:51.294 06:35:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:51.294 06:35:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:51.294 06:35:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54865' 00:04:51.294 06:35:04 -- common/autotest_common.sh@955 -- # kill 54865 00:04:51.294 06:35:04 -- common/autotest_common.sh@960 -- # wait 54865 00:04:51.557 [2024-12-14 06:35:05.348018] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:51.557 ************************************ 00:04:51.557 END TEST event_scheduler 00:04:51.557 ************************************ 00:04:51.557 00:04:51.557 real 0m2.364s 00:04:51.557 user 0m2.577s 00:04:51.557 sys 0m0.312s 00:04:51.557 06:35:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.557 06:35:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.817 06:35:05 -- event/event.sh@51 -- # modprobe -n nbd 00:04:51.817 06:35:05 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:51.817 06:35:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.817 06:35:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.817 06:35:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.817 ************************************ 00:04:51.817 START TEST app_repeat 00:04:51.817 ************************************ 00:04:51.817 06:35:05 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:04:51.817 06:35:05 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.817 06:35:05 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.817 06:35:05 -- event/event.sh@13 -- # local nbd_list 00:04:51.817 06:35:05 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.817 06:35:05 -- event/event.sh@14 -- # local bdev_list 00:04:51.817 06:35:05 -- event/event.sh@15 -- # local repeat_times=4 00:04:51.817 06:35:05 -- event/event.sh@17 -- # modprobe nbd 00:04:51.817 Process app_repeat pid: 54935 00:04:51.817 spdk_app_start Round 0 00:04:51.817 06:35:05 -- event/event.sh@19 -- # repeat_pid=54935 00:04:51.817 06:35:05 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.817 06:35:05 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 54935' 00:04:51.817 06:35:05 -- event/event.sh@23 -- # for i in {0..2} 00:04:51.817 06:35:05 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:51.817 06:35:05 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:51.817 06:35:05 -- event/event.sh@25 -- # waitforlisten 54935 /var/tmp/spdk-nbd.sock 00:04:51.817 06:35:05 -- common/autotest_common.sh@829 -- # '[' -z 54935 ']' 00:04:51.817 06:35:05 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:51.817 06:35:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.817 06:35:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:51.817 06:35:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.817 06:35:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.817 [2024-12-14 06:35:05.606082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:51.817 [2024-12-14 06:35:05.606175] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54935 ] 00:04:51.817 [2024-12-14 06:35:05.743221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.077 [2024-12-14 06:35:05.813532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.077 [2024-12-14 06:35:05.813550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.645 06:35:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.645 06:35:06 -- common/autotest_common.sh@862 -- # return 0 00:04:52.645 06:35:06 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.905 Malloc0 00:04:52.905 06:35:06 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.164 Malloc1 00:04:53.164 06:35:07 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.164 06:35:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.164 06:35:07 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.164 06:35:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.164 06:35:07 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.164 06:35:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.165 06:35:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.165 06:35:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.165 06:35:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.165 06:35:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.165 06:35:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.165 06:35:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.165 06:35:07 -- bdev/nbd_common.sh@12 -- # local i 00:04:53.165 06:35:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.165 06:35:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.165 06:35:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.424 /dev/nbd0 00:04:53.424 06:35:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.424 06:35:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.424 06:35:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:53.424 06:35:07 -- common/autotest_common.sh@867 -- # local i 00:04:53.424 06:35:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.424 06:35:07 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.424 06:35:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:53.424 06:35:07 -- common/autotest_common.sh@871 -- # break 00:04:53.424 06:35:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.424 06:35:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.424 06:35:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.424 1+0 records in 00:04:53.424 1+0 records out 00:04:53.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260184 s, 15.7 MB/s 00:04:53.424 06:35:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.424 06:35:07 -- common/autotest_common.sh@884 -- # size=4096 00:04:53.424 06:35:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.424 06:35:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.424 06:35:07 -- common/autotest_common.sh@887 -- # return 0 00:04:53.424 06:35:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.424 06:35:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.424 06:35:07 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:53.684 /dev/nbd1 00:04:53.685 06:35:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:53.685 06:35:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:53.685 06:35:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:53.685 06:35:07 -- common/autotest_common.sh@867 -- # local i 00:04:53.685 06:35:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.685 06:35:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.685 06:35:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:53.685 06:35:07 -- common/autotest_common.sh@871 -- # break 00:04:53.685 06:35:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.685 06:35:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.685 06:35:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.685 1+0 records in 00:04:53.685 1+0 records out 00:04:53.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031102 s, 13.2 MB/s 00:04:53.685 06:35:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.685 06:35:07 -- common/autotest_common.sh@884 -- # size=4096 00:04:53.685 06:35:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.685 06:35:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.685 06:35:07 -- common/autotest_common.sh@887 -- # return 0 00:04:53.685 06:35:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.685 06:35:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.685 06:35:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.685 06:35:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.685 06:35:07 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:53.945 { 00:04:53.945 "nbd_device": "/dev/nbd0", 00:04:53.945 "bdev_name": "Malloc0" 00:04:53.945 }, 00:04:53.945 { 00:04:53.945 "nbd_device": "/dev/nbd1", 
00:04:53.945 "bdev_name": "Malloc1" 00:04:53.945 } 00:04:53.945 ]' 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.945 { 00:04:53.945 "nbd_device": "/dev/nbd0", 00:04:53.945 "bdev_name": "Malloc0" 00:04:53.945 }, 00:04:53.945 { 00:04:53.945 "nbd_device": "/dev/nbd1", 00:04:53.945 "bdev_name": "Malloc1" 00:04:53.945 } 00:04:53.945 ]' 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.945 /dev/nbd1' 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.945 /dev/nbd1' 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.945 06:35:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.946 06:35:07 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.946 06:35:07 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.205 256+0 records in 00:04:54.205 256+0 records out 00:04:54.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00813867 s, 129 MB/s 00:04:54.205 06:35:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.205 06:35:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.205 256+0 records in 00:04:54.205 256+0 records out 00:04:54.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213354 s, 49.1 MB/s 00:04:54.205 06:35:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.205 06:35:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.205 256+0 records in 00:04:54.205 256+0 records out 00:04:54.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024928 s, 42.1 MB/s 00:04:54.205 06:35:07 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.205 06:35:07 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.205 06:35:07 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.205 06:35:07 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.205 06:35:07 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.205 06:35:07 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.205 06:35:07 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.205 06:35:07 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.205 06:35:07 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.205 06:35:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.205 06:35:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.205 06:35:08 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.205 06:35:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.205 06:35:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.205 06:35:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.205 06:35:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.205 06:35:08 -- bdev/nbd_common.sh@51 -- # local i 00:04:54.205 06:35:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.205 06:35:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.465 06:35:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.465 06:35:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.465 06:35:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.465 06:35:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.465 06:35:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.465 06:35:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.465 06:35:08 -- bdev/nbd_common.sh@41 -- # break 00:04:54.465 06:35:08 -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.465 06:35:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.465 06:35:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.724 06:35:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.724 06:35:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.724 06:35:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.724 06:35:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.724 06:35:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.724 06:35:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.724 06:35:08 -- bdev/nbd_common.sh@41 -- # break 00:04:54.724 06:35:08 -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.724 06:35:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.724 06:35:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.724 06:35:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.984 06:35:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.984 06:35:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.984 06:35:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.984 06:35:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.984 06:35:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.984 06:35:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.984 06:35:08 -- bdev/nbd_common.sh@65 -- # true 00:04:54.984 06:35:08 -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.984 06:35:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.984 06:35:08 -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.984 06:35:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.984 06:35:08 -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.984 06:35:08 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.243 06:35:09 -- event/event.sh@35 -- # sleep 3 00:04:55.502 [2024-12-14 06:35:09.246522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.502 [2024-12-14 06:35:09.294532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.502 [2024-12-14 
06:35:09.294543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.502 [2024-12-14 06:35:09.323443] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.502 [2024-12-14 06:35:09.323510] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:58.794 06:35:12 -- event/event.sh@23 -- # for i in {0..2} 00:04:58.794 06:35:12 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:58.794 spdk_app_start Round 1 00:04:58.794 06:35:12 -- event/event.sh@25 -- # waitforlisten 54935 /var/tmp/spdk-nbd.sock 00:04:58.794 06:35:12 -- common/autotest_common.sh@829 -- # '[' -z 54935 ']' 00:04:58.794 06:35:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.794 06:35:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.794 06:35:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:58.794 06:35:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.794 06:35:12 -- common/autotest_common.sh@10 -- # set +x 00:04:58.794 06:35:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.794 06:35:12 -- common/autotest_common.sh@862 -- # return 0 00:04:58.794 06:35:12 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.794 Malloc0 00:04:58.794 06:35:12 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.053 Malloc1 00:04:59.053 06:35:12 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@12 -- # local i 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.053 06:35:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.313 /dev/nbd0 00:04:59.313 06:35:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.313 06:35:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.313 06:35:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:59.313 06:35:13 -- common/autotest_common.sh@867 -- # local i 00:04:59.313 06:35:13 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:04:59.313 06:35:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.313 06:35:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:59.313 06:35:13 -- common/autotest_common.sh@871 -- # break 00:04:59.313 06:35:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.313 06:35:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.313 06:35:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.313 1+0 records in 00:04:59.313 1+0 records out 00:04:59.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303086 s, 13.5 MB/s 00:04:59.313 06:35:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.313 06:35:13 -- common/autotest_common.sh@884 -- # size=4096 00:04:59.313 06:35:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.313 06:35:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.313 06:35:13 -- common/autotest_common.sh@887 -- # return 0 00:04:59.313 06:35:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.313 06:35:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.313 06:35:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.573 /dev/nbd1 00:04:59.573 06:35:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.573 06:35:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.573 06:35:13 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:59.573 06:35:13 -- common/autotest_common.sh@867 -- # local i 00:04:59.573 06:35:13 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.573 06:35:13 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.573 06:35:13 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:59.573 06:35:13 -- common/autotest_common.sh@871 -- # break 00:04:59.573 06:35:13 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.573 06:35:13 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.573 06:35:13 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.573 1+0 records in 00:04:59.573 1+0 records out 00:04:59.573 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229396 s, 17.9 MB/s 00:04:59.573 06:35:13 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.573 06:35:13 -- common/autotest_common.sh@884 -- # size=4096 00:04:59.573 06:35:13 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.573 06:35:13 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.573 06:35:13 -- common/autotest_common.sh@887 -- # return 0 00:04:59.573 06:35:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.573 06:35:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.573 06:35:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.573 06:35:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.573 06:35:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.832 06:35:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.832 { 00:04:59.832 "nbd_device": "/dev/nbd0", 00:04:59.832 "bdev_name": "Malloc0" 00:04:59.832 }, 00:04:59.832 { 00:04:59.832 
"nbd_device": "/dev/nbd1", 00:04:59.832 "bdev_name": "Malloc1" 00:04:59.832 } 00:04:59.832 ]' 00:04:59.832 06:35:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.832 { 00:04:59.832 "nbd_device": "/dev/nbd0", 00:04:59.832 "bdev_name": "Malloc0" 00:04:59.832 }, 00:04:59.832 { 00:04:59.832 "nbd_device": "/dev/nbd1", 00:04:59.832 "bdev_name": "Malloc1" 00:04:59.832 } 00:04:59.832 ]' 00:04:59.832 06:35:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.833 06:35:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.833 /dev/nbd1' 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.092 /dev/nbd1' 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.092 256+0 records in 00:05:00.092 256+0 records out 00:05:00.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00784304 s, 134 MB/s 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.092 256+0 records in 00:05:00.092 256+0 records out 00:05:00.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222969 s, 47.0 MB/s 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.092 256+0 records in 00:05:00.092 256+0 records out 00:05:00.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245136 s, 42.8 MB/s 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.092 06:35:13 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@51 -- # local i 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.092 06:35:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.351 06:35:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.352 06:35:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.352 06:35:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.352 06:35:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.352 06:35:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.352 06:35:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.352 06:35:14 -- bdev/nbd_common.sh@41 -- # break 00:05:00.352 06:35:14 -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.352 06:35:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.352 06:35:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.611 06:35:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.611 06:35:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.611 06:35:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.611 06:35:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.611 06:35:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.611 06:35:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.611 06:35:14 -- bdev/nbd_common.sh@41 -- # break 00:05:00.611 06:35:14 -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.611 06:35:14 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.611 06:35:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.611 06:35:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.871 06:35:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.871 06:35:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.871 06:35:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.871 06:35:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.871 06:35:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.871 06:35:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.871 06:35:14 -- bdev/nbd_common.sh@65 -- # true 00:05:00.871 06:35:14 -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.871 06:35:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.871 06:35:14 -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.871 06:35:14 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.871 06:35:14 -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.871 06:35:14 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.440 06:35:15 -- event/event.sh@35 -- # sleep 3 00:05:01.440 [2024-12-14 06:35:15.268789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.440 [2024-12-14 06:35:15.316684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
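Each app_repeat round above runs the same data-verify cycle against the nbd devices before tearing them down. The following is a minimal stand-alone sketch of that cycle, assuming an SPDK app is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded; the rpc/sock/tmp variable names are introduced here only for illustration, while the RPC methods, sizes, and dd/cmp parameters mirror the log itself.

  #!/usr/bin/env bash
  # Sketch of the nbd write/verify cycle driven over the SPDK RPC socket.
  set -euo pipefail
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=$(mktemp)

  # Create two 64 MiB malloc bdevs with 4096-byte blocks and expose them as nbd devices.
  $rpc -s "$sock" bdev_malloc_create 64 4096      # -> Malloc0
  $rpc -s "$sock" bdev_malloc_create 64 4096      # -> Malloc1
  $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  $rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

  # Wait until the kernel has registered each device, as waitfornbd does
  # in the log (here without the 20-retry cap).
  for d in nbd0 nbd1; do
      until grep -q -w "$d" /proc/partitions; do sleep 0.1; done
  done

  # Write 1 MiB of random data to both devices, then verify it byte for byte.
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  dd if="$tmp" of=/dev/nbd1 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" /dev/nbd0
  cmp -b -n 1M "$tmp" /dev/nbd1
  rm "$tmp"

  # Tear down the nbd devices and confirm none are left.
  $rpc -s "$sock" nbd_stop_disk /dev/nbd0
  $rpc -s "$sock" nbd_stop_disk /dev/nbd1
  $rpc -s "$sock" nbd_get_disks                   # expect []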
00:05:01.440 [2024-12-14 06:35:15.316690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.440 [2024-12-14 06:35:15.345362] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.440 [2024-12-14 06:35:15.345464] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.731 06:35:18 -- event/event.sh@23 -- # for i in {0..2} 00:05:04.731 spdk_app_start Round 2 00:05:04.731 06:35:18 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:04.731 06:35:18 -- event/event.sh@25 -- # waitforlisten 54935 /var/tmp/spdk-nbd.sock 00:05:04.731 06:35:18 -- common/autotest_common.sh@829 -- # '[' -z 54935 ']' 00:05:04.731 06:35:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.731 06:35:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.731 06:35:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.731 06:35:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.731 06:35:18 -- common/autotest_common.sh@10 -- # set +x 00:05:04.731 06:35:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.731 06:35:18 -- common/autotest_common.sh@862 -- # return 0 00:05:04.731 06:35:18 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.731 Malloc0 00:05:04.731 06:35:18 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.991 Malloc1 00:05:04.991 06:35:18 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@12 -- # local i 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.991 06:35:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.250 /dev/nbd0 00:05:05.251 06:35:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.251 06:35:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.251 06:35:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:05.251 06:35:19 -- common/autotest_common.sh@867 -- # local i 00:05:05.251 06:35:19 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.251 06:35:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.251 06:35:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:05.251 06:35:19 -- common/autotest_common.sh@871 -- # break 00:05:05.251 06:35:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.251 06:35:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.251 06:35:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.251 1+0 records in 00:05:05.251 1+0 records out 00:05:05.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256407 s, 16.0 MB/s 00:05:05.251 06:35:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.251 06:35:19 -- common/autotest_common.sh@884 -- # size=4096 00:05:05.251 06:35:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.251 06:35:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.251 06:35:19 -- common/autotest_common.sh@887 -- # return 0 00:05:05.251 06:35:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.251 06:35:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.251 06:35:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.510 /dev/nbd1 00:05:05.510 06:35:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.510 06:35:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.510 06:35:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:05.510 06:35:19 -- common/autotest_common.sh@867 -- # local i 00:05:05.510 06:35:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.510 06:35:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.510 06:35:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:05.510 06:35:19 -- common/autotest_common.sh@871 -- # break 00:05:05.510 06:35:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.510 06:35:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.510 06:35:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.510 1+0 records in 00:05:05.510 1+0 records out 00:05:05.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312886 s, 13.1 MB/s 00:05:05.510 06:35:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.510 06:35:19 -- common/autotest_common.sh@884 -- # size=4096 00:05:05.510 06:35:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.510 06:35:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.510 06:35:19 -- common/autotest_common.sh@887 -- # return 0 00:05:05.510 06:35:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.510 06:35:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.510 06:35:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.510 06:35:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.510 06:35:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.770 06:35:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.770 { 00:05:05.770 "nbd_device": "/dev/nbd0", 00:05:05.770 "bdev_name": "Malloc0" 
00:05:05.770 }, 00:05:05.770 { 00:05:05.770 "nbd_device": "/dev/nbd1", 00:05:05.770 "bdev_name": "Malloc1" 00:05:05.770 } 00:05:05.770 ]' 00:05:05.770 06:35:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.770 { 00:05:05.770 "nbd_device": "/dev/nbd0", 00:05:05.770 "bdev_name": "Malloc0" 00:05:05.770 }, 00:05:05.770 { 00:05:05.770 "nbd_device": "/dev/nbd1", 00:05:05.770 "bdev_name": "Malloc1" 00:05:05.770 } 00:05:05.770 ]' 00:05:05.770 06:35:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.030 /dev/nbd1' 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.030 /dev/nbd1' 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.030 256+0 records in 00:05:06.030 256+0 records out 00:05:06.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00721368 s, 145 MB/s 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.030 256+0 records in 00:05:06.030 256+0 records out 00:05:06.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222646 s, 47.1 MB/s 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.030 256+0 records in 00:05:06.030 256+0 records out 00:05:06.030 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253221 s, 41.4 MB/s 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@51 -- # local i 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.030 06:35:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.289 06:35:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.289 06:35:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.289 06:35:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.289 06:35:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.289 06:35:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.289 06:35:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.289 06:35:20 -- bdev/nbd_common.sh@41 -- # break 00:05:06.289 06:35:20 -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.289 06:35:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.289 06:35:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.548 06:35:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.548 06:35:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.548 06:35:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.548 06:35:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.549 06:35:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.549 06:35:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.549 06:35:20 -- bdev/nbd_common.sh@41 -- # break 00:05:06.549 06:35:20 -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.549 06:35:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.549 06:35:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.549 06:35:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.808 06:35:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.808 06:35:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.808 06:35:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.808 06:35:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.808 06:35:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.808 06:35:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.808 06:35:20 -- bdev/nbd_common.sh@65 -- # true 00:05:06.808 06:35:20 -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.808 06:35:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.808 06:35:20 -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.808 06:35:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.808 06:35:20 -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.808 06:35:20 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.108 06:35:20 -- event/event.sh@35 -- # sleep 3 00:05:07.369 [2024-12-14 06:35:21.100209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.369 [2024-12-14 06:35:21.154663] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:07.369 [2024-12-14 06:35:21.154670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.369 [2024-12-14 06:35:21.184987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.369 [2024-12-14 06:35:21.185056] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:10.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.663 06:35:23 -- event/event.sh@38 -- # waitforlisten 54935 /var/tmp/spdk-nbd.sock 00:05:10.663 06:35:23 -- common/autotest_common.sh@829 -- # '[' -z 54935 ']' 00:05:10.663 06:35:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.663 06:35:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.663 06:35:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.663 06:35:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.663 06:35:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.663 06:35:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.663 06:35:24 -- common/autotest_common.sh@862 -- # return 0 00:05:10.663 06:35:24 -- event/event.sh@39 -- # killprocess 54935 00:05:10.663 06:35:24 -- common/autotest_common.sh@936 -- # '[' -z 54935 ']' 00:05:10.663 06:35:24 -- common/autotest_common.sh@940 -- # kill -0 54935 00:05:10.663 06:35:24 -- common/autotest_common.sh@941 -- # uname 00:05:10.663 06:35:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:10.663 06:35:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 54935 00:05:10.663 killing process with pid 54935 00:05:10.663 06:35:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:10.663 06:35:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:10.663 06:35:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 54935' 00:05:10.663 06:35:24 -- common/autotest_common.sh@955 -- # kill 54935 00:05:10.663 06:35:24 -- common/autotest_common.sh@960 -- # wait 54935 00:05:10.663 spdk_app_start is called in Round 0. 00:05:10.663 Shutdown signal received, stop current app iteration 00:05:10.663 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:10.663 spdk_app_start is called in Round 1. 00:05:10.663 Shutdown signal received, stop current app iteration 00:05:10.663 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:10.663 spdk_app_start is called in Round 2. 00:05:10.663 Shutdown signal received, stop current app iteration 00:05:10.663 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:10.663 spdk_app_start is called in Round 3. 
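The killprocess step just above follows a fixed pattern: confirm the pid is still alive, check the process name so a sudo wrapper is never signalled directly, send SIGTERM, and reap the process so the Round 3 shutdown messages can be collected. A hedged sketch of that pattern follows; the function name and the simplified sudo handling are illustrative, not the exact autotest_common.sh implementation.

  # Sketch of a killprocess-style helper: verify, signal, and reap a target pid.
  killprocess_sketch() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                      # is the process still running?
      local name
      name=$(ps --no-headers -o comm= "$pid")         # on Linux, resolve the comm name
      [ "$name" = sudo ] && return 1                  # simplified: refuse to TERM a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"                                     # SIGTERM; the SPDK app traps it and shuts down
      wait "$pid" 2>/dev/null || true                 # reap it if it is a child of this shell
  }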
00:05:10.663 Shutdown signal received, stop current app iteration 00:05:10.663 06:35:24 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:10.663 06:35:24 -- event/event.sh@42 -- # return 0 00:05:10.663 00:05:10.663 real 0m18.827s 00:05:10.663 user 0m42.750s 00:05:10.663 sys 0m2.513s 00:05:10.663 ************************************ 00:05:10.663 END TEST app_repeat 00:05:10.663 ************************************ 00:05:10.663 06:35:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:10.663 06:35:24 -- common/autotest_common.sh@10 -- # set +x 00:05:10.663 06:35:24 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:10.663 06:35:24 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:10.663 06:35:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.663 06:35:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.663 06:35:24 -- common/autotest_common.sh@10 -- # set +x 00:05:10.663 ************************************ 00:05:10.663 START TEST cpu_locks 00:05:10.663 ************************************ 00:05:10.663 06:35:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:10.663 * Looking for test storage... 00:05:10.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:10.663 06:35:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:10.663 06:35:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:10.663 06:35:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:10.663 06:35:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:10.663 06:35:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:10.663 06:35:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:10.663 06:35:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:10.663 06:35:24 -- scripts/common.sh@335 -- # IFS=.-: 00:05:10.663 06:35:24 -- scripts/common.sh@335 -- # read -ra ver1 00:05:10.663 06:35:24 -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.663 06:35:24 -- scripts/common.sh@336 -- # read -ra ver2 00:05:10.663 06:35:24 -- scripts/common.sh@337 -- # local 'op=<' 00:05:10.663 06:35:24 -- scripts/common.sh@339 -- # ver1_l=2 00:05:10.663 06:35:24 -- scripts/common.sh@340 -- # ver2_l=1 00:05:10.663 06:35:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:10.663 06:35:24 -- scripts/common.sh@343 -- # case "$op" in 00:05:10.663 06:35:24 -- scripts/common.sh@344 -- # : 1 00:05:10.663 06:35:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:10.663 06:35:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.663 06:35:24 -- scripts/common.sh@364 -- # decimal 1 00:05:10.663 06:35:24 -- scripts/common.sh@352 -- # local d=1 00:05:10.663 06:35:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.663 06:35:24 -- scripts/common.sh@354 -- # echo 1 00:05:10.663 06:35:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:10.663 06:35:24 -- scripts/common.sh@365 -- # decimal 2 00:05:10.663 06:35:24 -- scripts/common.sh@352 -- # local d=2 00:05:10.663 06:35:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.663 06:35:24 -- scripts/common.sh@354 -- # echo 2 00:05:10.663 06:35:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:10.663 06:35:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:10.663 06:35:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:10.663 06:35:24 -- scripts/common.sh@367 -- # return 0 00:05:10.663 06:35:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.663 06:35:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:10.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.663 --rc genhtml_branch_coverage=1 00:05:10.663 --rc genhtml_function_coverage=1 00:05:10.663 --rc genhtml_legend=1 00:05:10.663 --rc geninfo_all_blocks=1 00:05:10.663 --rc geninfo_unexecuted_blocks=1 00:05:10.663 00:05:10.663 ' 00:05:10.663 06:35:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:10.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.663 --rc genhtml_branch_coverage=1 00:05:10.663 --rc genhtml_function_coverage=1 00:05:10.663 --rc genhtml_legend=1 00:05:10.663 --rc geninfo_all_blocks=1 00:05:10.663 --rc geninfo_unexecuted_blocks=1 00:05:10.663 00:05:10.663 ' 00:05:10.663 06:35:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:10.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.663 --rc genhtml_branch_coverage=1 00:05:10.663 --rc genhtml_function_coverage=1 00:05:10.663 --rc genhtml_legend=1 00:05:10.663 --rc geninfo_all_blocks=1 00:05:10.663 --rc geninfo_unexecuted_blocks=1 00:05:10.663 00:05:10.663 ' 00:05:10.663 06:35:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:10.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.663 --rc genhtml_branch_coverage=1 00:05:10.663 --rc genhtml_function_coverage=1 00:05:10.663 --rc genhtml_legend=1 00:05:10.663 --rc geninfo_all_blocks=1 00:05:10.663 --rc geninfo_unexecuted_blocks=1 00:05:10.663 00:05:10.663 ' 00:05:10.663 06:35:24 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:10.663 06:35:24 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:10.663 06:35:24 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:10.663 06:35:24 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:10.663 06:35:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.663 06:35:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.663 06:35:24 -- common/autotest_common.sh@10 -- # set +x 00:05:10.663 ************************************ 00:05:10.663 START TEST default_locks 00:05:10.663 ************************************ 00:05:10.663 06:35:24 -- common/autotest_common.sh@1114 -- # default_locks 00:05:10.663 06:35:24 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=55380 00:05:10.923 06:35:24 -- event/cpu_locks.sh@47 -- # waitforlisten 55380 00:05:10.923 06:35:24 -- common/autotest_common.sh@829 -- # '[' -z 55380 ']' 00:05:10.923 06:35:24 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.923 06:35:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.923 06:35:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.923 06:35:24 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.923 06:35:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.923 06:35:24 -- common/autotest_common.sh@10 -- # set +x 00:05:10.923 [2024-12-14 06:35:24.713371] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:10.923 [2024-12-14 06:35:24.713509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55380 ] 00:05:10.923 [2024-12-14 06:35:24.851059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.923 [2024-12-14 06:35:24.908615] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:10.923 [2024-12-14 06:35:24.908824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.860 06:35:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.860 06:35:25 -- common/autotest_common.sh@862 -- # return 0 00:05:11.860 06:35:25 -- event/cpu_locks.sh@49 -- # locks_exist 55380 00:05:11.860 06:35:25 -- event/cpu_locks.sh@22 -- # lslocks -p 55380 00:05:11.860 06:35:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.431 06:35:26 -- event/cpu_locks.sh@50 -- # killprocess 55380 00:05:12.431 06:35:26 -- common/autotest_common.sh@936 -- # '[' -z 55380 ']' 00:05:12.431 06:35:26 -- common/autotest_common.sh@940 -- # kill -0 55380 00:05:12.431 06:35:26 -- common/autotest_common.sh@941 -- # uname 00:05:12.431 06:35:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:12.431 06:35:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55380 00:05:12.431 06:35:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:12.431 killing process with pid 55380 00:05:12.431 06:35:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:12.431 06:35:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55380' 00:05:12.431 06:35:26 -- common/autotest_common.sh@955 -- # kill 55380 00:05:12.431 06:35:26 -- common/autotest_common.sh@960 -- # wait 55380 00:05:12.431 06:35:26 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 55380 00:05:12.431 06:35:26 -- common/autotest_common.sh@650 -- # local es=0 00:05:12.431 06:35:26 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55380 00:05:12.431 06:35:26 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:12.431 06:35:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.431 06:35:26 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:12.431 06:35:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:12.431 06:35:26 -- common/autotest_common.sh@653 -- # waitforlisten 55380 00:05:12.431 06:35:26 -- common/autotest_common.sh@829 -- # '[' -z 55380 ']' 00:05:12.431 06:35:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.431 06:35:26 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.431 06:35:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.431 06:35:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.431 06:35:26 -- common/autotest_common.sh@10 -- # set +x 00:05:12.431 ERROR: process (pid: 55380) is no longer running 00:05:12.431 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55380) - No such process 00:05:12.431 06:35:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.431 06:35:26 -- common/autotest_common.sh@862 -- # return 1 00:05:12.431 06:35:26 -- common/autotest_common.sh@653 -- # es=1 00:05:12.431 06:35:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:12.431 06:35:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:12.431 06:35:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:12.431 06:35:26 -- event/cpu_locks.sh@54 -- # no_locks 00:05:12.431 06:35:26 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.431 06:35:26 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.431 06:35:26 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.431 00:05:12.431 real 0m1.764s 00:05:12.431 user 0m2.016s 00:05:12.431 sys 0m0.460s 00:05:12.431 06:35:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:12.431 ************************************ 00:05:12.431 END TEST default_locks 00:05:12.431 ************************************ 00:05:12.431 06:35:26 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 06:35:26 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:12.691 06:35:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.691 06:35:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.691 06:35:26 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 ************************************ 00:05:12.691 START TEST default_locks_via_rpc 00:05:12.691 ************************************ 00:05:12.691 06:35:26 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:12.691 06:35:26 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=55427 00:05:12.691 06:35:26 -- event/cpu_locks.sh@63 -- # waitforlisten 55427 00:05:12.691 06:35:26 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.691 06:35:26 -- common/autotest_common.sh@829 -- # '[' -z 55427 ']' 00:05:12.691 06:35:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.691 06:35:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.691 06:35:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.691 06:35:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.691 06:35:26 -- common/autotest_common.sh@10 -- # set +x 00:05:12.691 [2024-12-14 06:35:26.521600] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
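The default_locks test above passes as long as locks_exist finds a spdk_cpu_lock entry held by the target, and the follow-up NOT/waitforlisten check must fail once that pid is gone. A small sketch of the lock check, assuming lslocks from util-linux is available; the grep pattern is the one used in the log.

  # Sketch: does the target process hold an SPDK CPU core lock file?
  locks_exist_sketch() {
      local pid=$1
      lslocks -p "$pid" | grep -q spdk_cpu_lock   # exit 0 if a core lock is held
  }

  # Usage mirroring the default_locks flow above:
  #   locks_exist_sketch "$spdk_tgt_pid"    # succeeds while the target runs
  #   kill "$spdk_tgt_pid"; wait "$spdk_tgt_pid"
  #   locks_exist_sketch "$spdk_tgt_pid"    # fails once the process and its lock are gone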
00:05:12.691 [2024-12-14 06:35:26.521707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55427 ] 00:05:12.691 [2024-12-14 06:35:26.652293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.951 [2024-12-14 06:35:26.706265] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.951 [2024-12-14 06:35:26.706440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.890 06:35:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.890 06:35:27 -- common/autotest_common.sh@862 -- # return 0 00:05:13.890 06:35:27 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:13.890 06:35:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.890 06:35:27 -- common/autotest_common.sh@10 -- # set +x 00:05:13.890 06:35:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.890 06:35:27 -- event/cpu_locks.sh@67 -- # no_locks 00:05:13.890 06:35:27 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.890 06:35:27 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.890 06:35:27 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.890 06:35:27 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.890 06:35:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.890 06:35:27 -- common/autotest_common.sh@10 -- # set +x 00:05:13.890 06:35:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.890 06:35:27 -- event/cpu_locks.sh@71 -- # locks_exist 55427 00:05:13.890 06:35:27 -- event/cpu_locks.sh@22 -- # lslocks -p 55427 00:05:13.890 06:35:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.890 06:35:27 -- event/cpu_locks.sh@73 -- # killprocess 55427 00:05:13.890 06:35:27 -- common/autotest_common.sh@936 -- # '[' -z 55427 ']' 00:05:13.890 06:35:27 -- common/autotest_common.sh@940 -- # kill -0 55427 00:05:13.891 06:35:27 -- common/autotest_common.sh@941 -- # uname 00:05:13.891 06:35:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:13.891 06:35:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55427 00:05:14.151 06:35:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:14.151 killing process with pid 55427 00:05:14.151 06:35:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:14.151 06:35:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55427' 00:05:14.151 06:35:27 -- common/autotest_common.sh@955 -- # kill 55427 00:05:14.151 06:35:27 -- common/autotest_common.sh@960 -- # wait 55427 00:05:14.412 00:05:14.412 real 0m1.694s 00:05:14.412 user 0m1.974s 00:05:14.412 sys 0m0.389s 00:05:14.412 06:35:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.412 06:35:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.412 ************************************ 00:05:14.412 END TEST default_locks_via_rpc 00:05:14.412 ************************************ 00:05:14.412 06:35:28 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:14.412 06:35:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.412 06:35:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.412 06:35:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.412 
************************************ 00:05:14.412 START TEST non_locking_app_on_locked_coremask 00:05:14.412 ************************************ 00:05:14.412 06:35:28 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:14.412 06:35:28 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=55478 00:05:14.412 06:35:28 -- event/cpu_locks.sh@81 -- # waitforlisten 55478 /var/tmp/spdk.sock 00:05:14.412 06:35:28 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.412 06:35:28 -- common/autotest_common.sh@829 -- # '[' -z 55478 ']' 00:05:14.412 06:35:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.412 06:35:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.412 06:35:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.412 06:35:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.412 06:35:28 -- common/autotest_common.sh@10 -- # set +x 00:05:14.412 [2024-12-14 06:35:28.272523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:14.412 [2024-12-14 06:35:28.273082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55478 ] 00:05:14.672 [2024-12-14 06:35:28.407994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.672 [2024-12-14 06:35:28.461578] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:14.672 [2024-12-14 06:35:28.461737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.609 06:35:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.609 06:35:29 -- common/autotest_common.sh@862 -- # return 0 00:05:15.609 06:35:29 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=55494 00:05:15.609 06:35:29 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:15.609 06:35:29 -- event/cpu_locks.sh@85 -- # waitforlisten 55494 /var/tmp/spdk2.sock 00:05:15.609 06:35:29 -- common/autotest_common.sh@829 -- # '[' -z 55494 ']' 00:05:15.609 06:35:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.609 06:35:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.609 06:35:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.609 06:35:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.609 06:35:29 -- common/autotest_common.sh@10 -- # set +x 00:05:15.609 [2024-12-14 06:35:29.316498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:15.609 [2024-12-14 06:35:29.316590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55494 ] 00:05:15.609 [2024-12-14 06:35:29.450481] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:15.609 [2024-12-14 06:35:29.450534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.609 [2024-12-14 06:35:29.557067] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:15.609 [2024-12-14 06:35:29.557310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.547 06:35:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.547 06:35:30 -- common/autotest_common.sh@862 -- # return 0 00:05:16.547 06:35:30 -- event/cpu_locks.sh@87 -- # locks_exist 55478 00:05:16.547 06:35:30 -- event/cpu_locks.sh@22 -- # lslocks -p 55478 00:05:16.547 06:35:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.116 06:35:31 -- event/cpu_locks.sh@89 -- # killprocess 55478 00:05:17.116 06:35:31 -- common/autotest_common.sh@936 -- # '[' -z 55478 ']' 00:05:17.116 06:35:31 -- common/autotest_common.sh@940 -- # kill -0 55478 00:05:17.116 06:35:31 -- common/autotest_common.sh@941 -- # uname 00:05:17.116 06:35:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:17.116 06:35:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55478 00:05:17.116 killing process with pid 55478 00:05:17.116 06:35:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:17.116 06:35:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:17.116 06:35:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55478' 00:05:17.116 06:35:31 -- common/autotest_common.sh@955 -- # kill 55478 00:05:17.116 06:35:31 -- common/autotest_common.sh@960 -- # wait 55478 00:05:17.685 06:35:31 -- event/cpu_locks.sh@90 -- # killprocess 55494 00:05:17.685 06:35:31 -- common/autotest_common.sh@936 -- # '[' -z 55494 ']' 00:05:17.685 06:35:31 -- common/autotest_common.sh@940 -- # kill -0 55494 00:05:17.685 06:35:31 -- common/autotest_common.sh@941 -- # uname 00:05:17.685 06:35:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:17.685 06:35:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55494 00:05:17.944 06:35:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:17.944 killing process with pid 55494 00:05:17.944 06:35:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:17.944 06:35:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55494' 00:05:17.944 06:35:31 -- common/autotest_common.sh@955 -- # kill 55494 00:05:17.944 06:35:31 -- common/autotest_common.sh@960 -- # wait 55494 00:05:18.204 ************************************ 00:05:18.204 END TEST non_locking_app_on_locked_coremask 00:05:18.204 ************************************ 00:05:18.204 00:05:18.204 real 0m3.771s 00:05:18.204 user 0m4.428s 00:05:18.204 sys 0m0.914s 00:05:18.204 06:35:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.204 06:35:31 -- common/autotest_common.sh@10 -- # set +x 00:05:18.204 06:35:32 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:18.204 06:35:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.204 06:35:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.204 06:35:32 -- common/autotest_common.sh@10 -- # set +x 00:05:18.204 ************************************ 00:05:18.204 START TEST locking_app_on_unlocked_coremask 00:05:18.204 ************************************ 00:05:18.204 06:35:32 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:18.204 06:35:32 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=55556 00:05:18.204 06:35:32 -- event/cpu_locks.sh@99 -- # waitforlisten 55556 /var/tmp/spdk.sock 00:05:18.204 06:35:32 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:18.204 06:35:32 -- common/autotest_common.sh@829 -- # '[' -z 55556 ']' 00:05:18.204 06:35:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.204 06:35:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.204 06:35:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.204 06:35:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.204 06:35:32 -- common/autotest_common.sh@10 -- # set +x 00:05:18.205 [2024-12-14 06:35:32.100776] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:18.205 [2024-12-14 06:35:32.100921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55556 ] 00:05:18.464 [2024-12-14 06:35:32.234458] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:18.464 [2024-12-14 06:35:32.234500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.464 [2024-12-14 06:35:32.287517] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:18.464 [2024-12-14 06:35:32.287680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.403 06:35:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.403 06:35:33 -- common/autotest_common.sh@862 -- # return 0 00:05:19.403 06:35:33 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=55572 00:05:19.403 06:35:33 -- event/cpu_locks.sh@103 -- # waitforlisten 55572 /var/tmp/spdk2.sock 00:05:19.403 06:35:33 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:19.403 06:35:33 -- common/autotest_common.sh@829 -- # '[' -z 55572 ']' 00:05:19.403 06:35:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.403 06:35:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.403 06:35:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.403 06:35:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.403 06:35:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.403 [2024-12-14 06:35:33.150715] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:19.403 [2024-12-14 06:35:33.151730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55572 ] 00:05:19.403 [2024-12-14 06:35:33.292239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.663 [2024-12-14 06:35:33.408437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:19.663 [2024-12-14 06:35:33.408604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.232 06:35:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.232 06:35:34 -- common/autotest_common.sh@862 -- # return 0 00:05:20.232 06:35:34 -- event/cpu_locks.sh@105 -- # locks_exist 55572 00:05:20.232 06:35:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.232 06:35:34 -- event/cpu_locks.sh@22 -- # lslocks -p 55572 00:05:21.170 06:35:35 -- event/cpu_locks.sh@107 -- # killprocess 55556 00:05:21.170 06:35:35 -- common/autotest_common.sh@936 -- # '[' -z 55556 ']' 00:05:21.170 06:35:35 -- common/autotest_common.sh@940 -- # kill -0 55556 00:05:21.170 06:35:35 -- common/autotest_common.sh@941 -- # uname 00:05:21.170 06:35:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:21.170 06:35:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55556 00:05:21.170 killing process with pid 55556 00:05:21.170 06:35:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:21.170 06:35:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:21.170 06:35:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55556' 00:05:21.170 06:35:35 -- common/autotest_common.sh@955 -- # kill 55556 00:05:21.170 06:35:35 -- common/autotest_common.sh@960 -- # wait 55556 00:05:21.738 06:35:35 -- event/cpu_locks.sh@108 -- # killprocess 55572 00:05:21.738 06:35:35 -- common/autotest_common.sh@936 -- # '[' -z 55572 ']' 00:05:21.738 06:35:35 -- common/autotest_common.sh@940 -- # kill -0 55572 00:05:21.738 06:35:35 -- common/autotest_common.sh@941 -- # uname 00:05:21.738 06:35:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:21.738 06:35:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55572 00:05:21.738 killing process with pid 55572 00:05:21.738 06:35:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:21.738 06:35:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:21.738 06:35:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55572' 00:05:21.738 06:35:35 -- common/autotest_common.sh@955 -- # kill 55572 00:05:21.738 06:35:35 -- common/autotest_common.sh@960 -- # wait 55572 00:05:21.997 00:05:21.997 real 0m3.841s 00:05:21.998 user 0m4.562s 00:05:21.998 sys 0m0.912s 00:05:21.998 06:35:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.998 ************************************ 00:05:21.998 END TEST locking_app_on_unlocked_coremask 00:05:21.998 ************************************ 00:05:21.998 06:35:35 -- common/autotest_common.sh@10 -- # set +x 00:05:21.998 06:35:35 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:21.998 06:35:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.998 06:35:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.998 06:35:35 -- common/autotest_common.sh@10 -- # set +x 
00:05:21.998 ************************************ 00:05:21.998 START TEST locking_app_on_locked_coremask 00:05:21.998 ************************************ 00:05:21.998 06:35:35 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:21.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.998 06:35:35 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=55634 00:05:21.998 06:35:35 -- event/cpu_locks.sh@116 -- # waitforlisten 55634 /var/tmp/spdk.sock 00:05:21.998 06:35:35 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.998 06:35:35 -- common/autotest_common.sh@829 -- # '[' -z 55634 ']' 00:05:21.998 06:35:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.998 06:35:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.998 06:35:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.998 06:35:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.998 06:35:35 -- common/autotest_common.sh@10 -- # set +x 00:05:22.257 [2024-12-14 06:35:35.990428] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:22.257 [2024-12-14 06:35:35.990531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55634 ] 00:05:22.257 [2024-12-14 06:35:36.123256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.257 [2024-12-14 06:35:36.174143] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.257 [2024-12-14 06:35:36.174329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.196 06:35:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.196 06:35:36 -- common/autotest_common.sh@862 -- # return 0 00:05:23.196 06:35:36 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:23.196 06:35:36 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=55650 00:05:23.196 06:35:36 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 55650 /var/tmp/spdk2.sock 00:05:23.196 06:35:36 -- common/autotest_common.sh@650 -- # local es=0 00:05:23.196 06:35:36 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55650 /var/tmp/spdk2.sock 00:05:23.196 06:35:36 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:23.196 06:35:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:23.196 06:35:36 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:23.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.196 06:35:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:23.196 06:35:36 -- common/autotest_common.sh@653 -- # waitforlisten 55650 /var/tmp/spdk2.sock 00:05:23.196 06:35:36 -- common/autotest_common.sh@829 -- # '[' -z 55650 ']' 00:05:23.196 06:35:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.196 06:35:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.196 06:35:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:23.196 06:35:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.196 06:35:36 -- common/autotest_common.sh@10 -- # set +x 00:05:23.196 [2024-12-14 06:35:36.959019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:23.196 [2024-12-14 06:35:36.959089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55650 ] 00:05:23.196 [2024-12-14 06:35:37.098233] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 55634 has claimed it. 00:05:23.196 [2024-12-14 06:35:37.098324] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:23.765 ERROR: process (pid: 55650) is no longer running 00:05:23.765 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55650) - No such process 00:05:23.765 06:35:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.765 06:35:37 -- common/autotest_common.sh@862 -- # return 1 00:05:23.765 06:35:37 -- common/autotest_common.sh@653 -- # es=1 00:05:23.765 06:35:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:23.765 06:35:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:23.765 06:35:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:23.765 06:35:37 -- event/cpu_locks.sh@122 -- # locks_exist 55634 00:05:23.765 06:35:37 -- event/cpu_locks.sh@22 -- # lslocks -p 55634 00:05:23.765 06:35:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.333 06:35:38 -- event/cpu_locks.sh@124 -- # killprocess 55634 00:05:24.333 06:35:38 -- common/autotest_common.sh@936 -- # '[' -z 55634 ']' 00:05:24.333 06:35:38 -- common/autotest_common.sh@940 -- # kill -0 55634 00:05:24.333 06:35:38 -- common/autotest_common.sh@941 -- # uname 00:05:24.333 06:35:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.333 06:35:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55634 00:05:24.333 killing process with pid 55634 00:05:24.333 06:35:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.333 06:35:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.333 06:35:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55634' 00:05:24.333 06:35:38 -- common/autotest_common.sh@955 -- # kill 55634 00:05:24.333 06:35:38 -- common/autotest_common.sh@960 -- # wait 55634 00:05:24.592 00:05:24.592 real 0m2.484s 00:05:24.592 user 0m2.952s 00:05:24.592 sys 0m0.524s 00:05:24.592 06:35:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:24.592 ************************************ 00:05:24.593 END TEST locking_app_on_locked_coremask 00:05:24.593 ************************************ 00:05:24.593 06:35:38 -- common/autotest_common.sh@10 -- # set +x 00:05:24.593 06:35:38 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:24.593 06:35:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.593 06:35:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.593 06:35:38 -- common/autotest_common.sh@10 -- # set +x 00:05:24.593 ************************************ 00:05:24.593 START TEST locking_overlapped_coremask 00:05:24.593 ************************************ 00:05:24.593 06:35:38 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:24.593 06:35:38 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=55701 00:05:24.593 06:35:38 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:24.593 06:35:38 -- event/cpu_locks.sh@133 -- # waitforlisten 55701 /var/tmp/spdk.sock 00:05:24.593 06:35:38 -- common/autotest_common.sh@829 -- # '[' -z 55701 ']' 00:05:24.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.593 06:35:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.593 06:35:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.593 06:35:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.593 06:35:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.593 06:35:38 -- common/autotest_common.sh@10 -- # set +x 00:05:24.593 [2024-12-14 06:35:38.529215] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:24.593 [2024-12-14 06:35:38.529346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55701 ] 00:05:24.858 [2024-12-14 06:35:38.665532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.858 [2024-12-14 06:35:38.717625] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.858 [2024-12-14 06:35:38.718148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.858 [2024-12-14 06:35:38.718959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.858 [2024-12-14 06:35:38.718966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.810 06:35:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.810 06:35:39 -- common/autotest_common.sh@862 -- # return 0 00:05:25.810 06:35:39 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=55719 00:05:25.810 06:35:39 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 55719 /var/tmp/spdk2.sock 00:05:25.810 06:35:39 -- common/autotest_common.sh@650 -- # local es=0 00:05:25.810 06:35:39 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:25.810 06:35:39 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 55719 /var/tmp/spdk2.sock 00:05:25.810 06:35:39 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:25.810 06:35:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.810 06:35:39 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:25.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.810 06:35:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.810 06:35:39 -- common/autotest_common.sh@653 -- # waitforlisten 55719 /var/tmp/spdk2.sock 00:05:25.810 06:35:39 -- common/autotest_common.sh@829 -- # '[' -z 55719 ']' 00:05:25.810 06:35:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.810 06:35:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.810 06:35:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:25.810 06:35:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.810 06:35:39 -- common/autotest_common.sh@10 -- # set +x 00:05:25.810 [2024-12-14 06:35:39.523335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:25.810 [2024-12-14 06:35:39.523425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55719 ] 00:05:25.810 [2024-12-14 06:35:39.666544] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55701 has claimed it. 00:05:25.810 [2024-12-14 06:35:39.670003] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:26.378 ERROR: process (pid: 55719) is no longer running 00:05:26.378 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (55719) - No such process 00:05:26.378 06:35:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.378 06:35:40 -- common/autotest_common.sh@862 -- # return 1 00:05:26.378 06:35:40 -- common/autotest_common.sh@653 -- # es=1 00:05:26.378 06:35:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.378 06:35:40 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.378 06:35:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.378 06:35:40 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:26.378 06:35:40 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:26.378 06:35:40 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:26.378 06:35:40 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:26.378 06:35:40 -- event/cpu_locks.sh@141 -- # killprocess 55701 00:05:26.378 06:35:40 -- common/autotest_common.sh@936 -- # '[' -z 55701 ']' 00:05:26.378 06:35:40 -- common/autotest_common.sh@940 -- # kill -0 55701 00:05:26.378 06:35:40 -- common/autotest_common.sh@941 -- # uname 00:05:26.378 06:35:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.378 06:35:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55701 00:05:26.378 killing process with pid 55701 00:05:26.378 06:35:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.378 06:35:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.378 06:35:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55701' 00:05:26.378 06:35:40 -- common/autotest_common.sh@955 -- # kill 55701 00:05:26.378 06:35:40 -- common/autotest_common.sh@960 -- # wait 55701 00:05:26.637 ************************************ 00:05:26.637 END TEST locking_overlapped_coremask 00:05:26.637 ************************************ 00:05:26.637 00:05:26.637 real 0m2.083s 00:05:26.637 user 0m5.959s 00:05:26.637 sys 0m0.315s 00:05:26.637 06:35:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:26.637 06:35:40 -- common/autotest_common.sh@10 -- # set +x 00:05:26.637 06:35:40 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:26.637 06:35:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.637 06:35:40 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.637 06:35:40 -- common/autotest_common.sh@10 -- # set +x 00:05:26.637 ************************************ 00:05:26.637 START TEST locking_overlapped_coremask_via_rpc 00:05:26.637 ************************************ 00:05:26.637 06:35:40 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:26.637 06:35:40 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=55759 00:05:26.637 06:35:40 -- event/cpu_locks.sh@149 -- # waitforlisten 55759 /var/tmp/spdk.sock 00:05:26.637 06:35:40 -- common/autotest_common.sh@829 -- # '[' -z 55759 ']' 00:05:26.637 06:35:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.637 06:35:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.637 06:35:40 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:26.637 06:35:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.637 06:35:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.637 06:35:40 -- common/autotest_common.sh@10 -- # set +x 00:05:26.896 [2024-12-14 06:35:40.650803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:26.896 [2024-12-14 06:35:40.651457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55759 ] 00:05:26.896 [2024-12-14 06:35:40.783719] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:26.896 [2024-12-14 06:35:40.783761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.896 [2024-12-14 06:35:40.835999] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:26.896 [2024-12-14 06:35:40.836238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.896 [2024-12-14 06:35:40.836983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.896 [2024-12-14 06:35:40.836993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.832 06:35:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.832 06:35:41 -- common/autotest_common.sh@862 -- # return 0 00:05:27.832 06:35:41 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=55777 00:05:27.832 06:35:41 -- event/cpu_locks.sh@153 -- # waitforlisten 55777 /var/tmp/spdk2.sock 00:05:27.832 06:35:41 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:27.832 06:35:41 -- common/autotest_common.sh@829 -- # '[' -z 55777 ']' 00:05:27.832 06:35:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.832 06:35:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.832 06:35:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:27.832 06:35:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.832 06:35:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.832 [2024-12-14 06:35:41.648618] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:27.832 [2024-12-14 06:35:41.648928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55777 ] 00:05:27.832 [2024-12-14 06:35:41.790328] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:27.832 [2024-12-14 06:35:41.790382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.091 [2024-12-14 06:35:41.893379] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.091 [2024-12-14 06:35:41.893710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.091 [2024-12-14 06:35:41.898028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.091 [2024-12-14 06:35:41.898029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:28.659 06:35:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.659 06:35:42 -- common/autotest_common.sh@862 -- # return 0 00:05:28.659 06:35:42 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:28.659 06:35:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.659 06:35:42 -- common/autotest_common.sh@10 -- # set +x 00:05:28.659 06:35:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.659 06:35:42 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:28.659 06:35:42 -- common/autotest_common.sh@650 -- # local es=0 00:05:28.659 06:35:42 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:28.659 06:35:42 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:28.659 06:35:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.659 06:35:42 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:28.659 06:35:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:28.659 06:35:42 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:28.659 06:35:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.659 06:35:42 -- common/autotest_common.sh@10 -- # set +x 00:05:28.659 [2024-12-14 06:35:42.642054] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 55759 has claimed it. 00:05:28.918 request: 00:05:28.918 { 00:05:28.918 "method": "framework_enable_cpumask_locks", 00:05:28.918 "req_id": 1 00:05:28.918 } 00:05:28.918 Got JSON-RPC error response 00:05:28.918 response: 00:05:28.918 { 00:05:28.918 "code": -32603, 00:05:28.918 "message": "Failed to claim CPU core: 2" 00:05:28.918 } 00:05:28.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.918 06:35:42 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:28.918 06:35:42 -- common/autotest_common.sh@653 -- # es=1 00:05:28.918 06:35:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:28.918 06:35:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:28.918 06:35:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:28.918 06:35:42 -- event/cpu_locks.sh@158 -- # waitforlisten 55759 /var/tmp/spdk.sock 00:05:28.918 06:35:42 -- common/autotest_common.sh@829 -- # '[' -z 55759 ']' 00:05:28.918 06:35:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.918 06:35:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.918 06:35:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.918 06:35:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.918 06:35:42 -- common/autotest_common.sh@10 -- # set +x 00:05:29.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.177 06:35:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.177 06:35:42 -- common/autotest_common.sh@862 -- # return 0 00:05:29.177 06:35:42 -- event/cpu_locks.sh@159 -- # waitforlisten 55777 /var/tmp/spdk2.sock 00:05:29.177 06:35:42 -- common/autotest_common.sh@829 -- # '[' -z 55777 ']' 00:05:29.177 06:35:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.177 06:35:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.177 06:35:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.177 06:35:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.177 06:35:42 -- common/autotest_common.sh@10 -- # set +x 00:05:29.435 ************************************ 00:05:29.435 END TEST locking_overlapped_coremask_via_rpc 00:05:29.435 ************************************ 00:05:29.435 06:35:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.435 06:35:43 -- common/autotest_common.sh@862 -- # return 0 00:05:29.435 06:35:43 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:29.435 06:35:43 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:29.435 06:35:43 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:29.435 06:35:43 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:29.436 00:05:29.436 real 0m2.583s 00:05:29.436 user 0m1.370s 00:05:29.436 sys 0m0.152s 00:05:29.436 06:35:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.436 06:35:43 -- common/autotest_common.sh@10 -- # set +x 00:05:29.436 06:35:43 -- event/cpu_locks.sh@174 -- # cleanup 00:05:29.436 06:35:43 -- event/cpu_locks.sh@15 -- # [[ -z 55759 ]] 00:05:29.436 06:35:43 -- event/cpu_locks.sh@15 -- # killprocess 55759 00:05:29.436 06:35:43 -- common/autotest_common.sh@936 -- # '[' -z 55759 ']' 00:05:29.436 06:35:43 -- common/autotest_common.sh@940 -- # kill -0 55759 00:05:29.436 06:35:43 -- common/autotest_common.sh@941 -- # uname 00:05:29.436 06:35:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:29.436 06:35:43 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 55759 00:05:29.436 killing process with pid 55759 00:05:29.436 06:35:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:29.436 06:35:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:29.436 06:35:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55759' 00:05:29.436 06:35:43 -- common/autotest_common.sh@955 -- # kill 55759 00:05:29.436 06:35:43 -- common/autotest_common.sh@960 -- # wait 55759 00:05:29.694 06:35:43 -- event/cpu_locks.sh@16 -- # [[ -z 55777 ]] 00:05:29.694 06:35:43 -- event/cpu_locks.sh@16 -- # killprocess 55777 00:05:29.694 06:35:43 -- common/autotest_common.sh@936 -- # '[' -z 55777 ']' 00:05:29.694 06:35:43 -- common/autotest_common.sh@940 -- # kill -0 55777 00:05:29.694 06:35:43 -- common/autotest_common.sh@941 -- # uname 00:05:29.694 06:35:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:29.694 06:35:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 55777 00:05:29.694 killing process with pid 55777 00:05:29.694 06:35:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:29.694 06:35:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:29.694 06:35:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 55777' 00:05:29.694 06:35:43 -- common/autotest_common.sh@955 -- # kill 55777 00:05:29.694 06:35:43 -- common/autotest_common.sh@960 -- # wait 55777 00:05:29.952 06:35:43 -- event/cpu_locks.sh@18 -- # rm -f 00:05:29.952 06:35:43 -- event/cpu_locks.sh@1 -- # cleanup 00:05:29.952 06:35:43 -- event/cpu_locks.sh@15 -- # [[ -z 55759 ]] 00:05:29.952 06:35:43 -- event/cpu_locks.sh@15 -- # killprocess 55759 00:05:29.952 06:35:43 -- common/autotest_common.sh@936 -- # '[' -z 55759 ']' 00:05:29.952 Process with pid 55759 is not found 00:05:29.952 Process with pid 55777 is not found 00:05:29.952 06:35:43 -- common/autotest_common.sh@940 -- # kill -0 55759 00:05:29.952 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (55759) - No such process 00:05:29.952 06:35:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 55759 is not found' 00:05:29.952 06:35:43 -- event/cpu_locks.sh@16 -- # [[ -z 55777 ]] 00:05:29.952 06:35:43 -- event/cpu_locks.sh@16 -- # killprocess 55777 00:05:29.952 06:35:43 -- common/autotest_common.sh@936 -- # '[' -z 55777 ']' 00:05:29.952 06:35:43 -- common/autotest_common.sh@940 -- # kill -0 55777 00:05:29.952 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (55777) - No such process 00:05:29.952 06:35:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 55777 is not found' 00:05:29.953 06:35:43 -- event/cpu_locks.sh@18 -- # rm -f 00:05:29.953 00:05:29.953 real 0m19.383s 00:05:29.953 user 0m35.267s 00:05:29.953 sys 0m4.328s 00:05:29.953 06:35:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.953 06:35:43 -- common/autotest_common.sh@10 -- # set +x 00:05:29.953 ************************************ 00:05:29.953 END TEST cpu_locks 00:05:29.953 ************************************ 00:05:29.953 00:05:29.953 real 0m44.923s 00:05:29.953 user 1m27.187s 00:05:29.953 sys 0m7.542s 00:05:29.953 06:35:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.953 06:35:43 -- common/autotest_common.sh@10 -- # set +x 00:05:29.953 ************************************ 00:05:29.953 END TEST event 00:05:29.953 ************************************ 00:05:29.953 06:35:43 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:29.953 06:35:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.953 06:35:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.953 06:35:43 -- common/autotest_common.sh@10 -- # set +x 00:05:29.953 ************************************ 00:05:29.953 START TEST thread 00:05:29.953 ************************************ 00:05:29.953 06:35:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:30.212 * Looking for test storage... 00:05:30.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:30.212 06:35:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:30.212 06:35:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:30.212 06:35:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:30.212 06:35:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:30.212 06:35:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:30.212 06:35:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:30.212 06:35:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:30.212 06:35:44 -- scripts/common.sh@335 -- # IFS=.-: 00:05:30.212 06:35:44 -- scripts/common.sh@335 -- # read -ra ver1 00:05:30.212 06:35:44 -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.212 06:35:44 -- scripts/common.sh@336 -- # read -ra ver2 00:05:30.212 06:35:44 -- scripts/common.sh@337 -- # local 'op=<' 00:05:30.212 06:35:44 -- scripts/common.sh@339 -- # ver1_l=2 00:05:30.212 06:35:44 -- scripts/common.sh@340 -- # ver2_l=1 00:05:30.212 06:35:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:30.212 06:35:44 -- scripts/common.sh@343 -- # case "$op" in 00:05:30.212 06:35:44 -- scripts/common.sh@344 -- # : 1 00:05:30.212 06:35:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:30.212 06:35:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.212 06:35:44 -- scripts/common.sh@364 -- # decimal 1 00:05:30.212 06:35:44 -- scripts/common.sh@352 -- # local d=1 00:05:30.212 06:35:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.212 06:35:44 -- scripts/common.sh@354 -- # echo 1 00:05:30.212 06:35:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:30.212 06:35:44 -- scripts/common.sh@365 -- # decimal 2 00:05:30.212 06:35:44 -- scripts/common.sh@352 -- # local d=2 00:05:30.212 06:35:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.212 06:35:44 -- scripts/common.sh@354 -- # echo 2 00:05:30.212 06:35:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:30.212 06:35:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:30.212 06:35:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:30.212 06:35:44 -- scripts/common.sh@367 -- # return 0 00:05:30.212 06:35:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.212 06:35:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:30.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.212 --rc genhtml_branch_coverage=1 00:05:30.212 --rc genhtml_function_coverage=1 00:05:30.212 --rc genhtml_legend=1 00:05:30.212 --rc geninfo_all_blocks=1 00:05:30.212 --rc geninfo_unexecuted_blocks=1 00:05:30.212 00:05:30.212 ' 00:05:30.212 06:35:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:30.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.212 --rc genhtml_branch_coverage=1 00:05:30.212 --rc genhtml_function_coverage=1 00:05:30.212 --rc genhtml_legend=1 00:05:30.212 --rc geninfo_all_blocks=1 00:05:30.212 --rc geninfo_unexecuted_blocks=1 00:05:30.212 00:05:30.212 ' 00:05:30.212 06:35:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:30.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.212 --rc genhtml_branch_coverage=1 00:05:30.212 --rc genhtml_function_coverage=1 00:05:30.212 --rc genhtml_legend=1 00:05:30.212 --rc geninfo_all_blocks=1 00:05:30.212 --rc geninfo_unexecuted_blocks=1 00:05:30.212 00:05:30.212 ' 00:05:30.212 06:35:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:30.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.212 --rc genhtml_branch_coverage=1 00:05:30.212 --rc genhtml_function_coverage=1 00:05:30.212 --rc genhtml_legend=1 00:05:30.212 --rc geninfo_all_blocks=1 00:05:30.212 --rc geninfo_unexecuted_blocks=1 00:05:30.212 00:05:30.212 ' 00:05:30.212 06:35:44 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:30.212 06:35:44 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:30.212 06:35:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.212 06:35:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.212 ************************************ 00:05:30.212 START TEST thread_poller_perf 00:05:30.212 ************************************ 00:05:30.212 06:35:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:30.212 [2024-12-14 06:35:44.126534] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:30.212 [2024-12-14 06:35:44.126769] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55901 ] 00:05:30.471 [2024-12-14 06:35:44.258152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.471 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:30.471 [2024-12-14 06:35:44.306996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.848 [2024-12-14T06:35:45.840Z] ====================================== 00:05:31.848 [2024-12-14T06:35:45.840Z] busy:2208552480 (cyc) 00:05:31.848 [2024-12-14T06:35:45.840Z] total_run_count: 360000 00:05:31.848 [2024-12-14T06:35:45.840Z] tsc_hz: 2200000000 (cyc) 00:05:31.848 [2024-12-14T06:35:45.840Z] ====================================== 00:05:31.848 [2024-12-14T06:35:45.840Z] poller_cost: 6134 (cyc), 2788 (nsec) 00:05:31.848 ************************************ 00:05:31.848 END TEST thread_poller_perf 00:05:31.848 ************************************ 00:05:31.848 00:05:31.848 real 0m1.294s 00:05:31.848 user 0m1.149s 00:05:31.848 sys 0m0.038s 00:05:31.848 06:35:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.848 06:35:45 -- common/autotest_common.sh@10 -- # set +x 00:05:31.848 06:35:45 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:31.848 06:35:45 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:31.848 06:35:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.848 06:35:45 -- common/autotest_common.sh@10 -- # set +x 00:05:31.848 ************************************ 00:05:31.848 START TEST thread_poller_perf 00:05:31.848 ************************************ 00:05:31.848 06:35:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:31.848 [2024-12-14 06:35:45.473433] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:31.848 [2024-12-14 06:35:45.473532] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55936 ] 00:05:31.848 [2024-12-14 06:35:45.610116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.848 [2024-12-14 06:35:45.658789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.848 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:32.784 [2024-12-14T06:35:46.776Z] ====================================== 00:05:32.784 [2024-12-14T06:35:46.776Z] busy:2202579668 (cyc) 00:05:32.784 [2024-12-14T06:35:46.776Z] total_run_count: 4980000 00:05:32.784 [2024-12-14T06:35:46.776Z] tsc_hz: 2200000000 (cyc) 00:05:32.784 [2024-12-14T06:35:46.776Z] ====================================== 00:05:32.784 [2024-12-14T06:35:46.776Z] poller_cost: 442 (cyc), 200 (nsec) 00:05:32.784 00:05:32.784 real 0m1.287s 00:05:32.784 user 0m1.140s 00:05:32.784 sys 0m0.041s 00:05:32.784 ************************************ 00:05:32.784 END TEST thread_poller_perf 00:05:32.784 ************************************ 00:05:32.784 06:35:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.784 06:35:46 -- common/autotest_common.sh@10 -- # set +x 00:05:33.043 06:35:46 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:33.043 ************************************ 00:05:33.043 END TEST thread 00:05:33.043 ************************************ 00:05:33.043 00:05:33.043 real 0m2.845s 00:05:33.043 user 0m2.412s 00:05:33.043 sys 0m0.215s 00:05:33.043 06:35:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.043 06:35:46 -- common/autotest_common.sh@10 -- # set +x 00:05:33.043 06:35:46 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:33.043 06:35:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.043 06:35:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.043 06:35:46 -- common/autotest_common.sh@10 -- # set +x 00:05:33.043 ************************************ 00:05:33.043 START TEST accel 00:05:33.043 ************************************ 00:05:33.043 06:35:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:33.043 * Looking for test storage... 00:05:33.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:33.043 06:35:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:33.043 06:35:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:33.043 06:35:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:33.043 06:35:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:33.043 06:35:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:33.043 06:35:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:33.043 06:35:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:33.043 06:35:46 -- scripts/common.sh@335 -- # IFS=.-: 00:05:33.043 06:35:46 -- scripts/common.sh@335 -- # read -ra ver1 00:05:33.043 06:35:46 -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.043 06:35:46 -- scripts/common.sh@336 -- # read -ra ver2 00:05:33.043 06:35:46 -- scripts/common.sh@337 -- # local 'op=<' 00:05:33.043 06:35:46 -- scripts/common.sh@339 -- # ver1_l=2 00:05:33.043 06:35:46 -- scripts/common.sh@340 -- # ver2_l=1 00:05:33.043 06:35:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:33.043 06:35:46 -- scripts/common.sh@343 -- # case "$op" in 00:05:33.043 06:35:46 -- scripts/common.sh@344 -- # : 1 00:05:33.043 06:35:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:33.043 06:35:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.043 06:35:47 -- scripts/common.sh@364 -- # decimal 1 00:05:33.043 06:35:47 -- scripts/common.sh@352 -- # local d=1 00:05:33.043 06:35:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.043 06:35:47 -- scripts/common.sh@354 -- # echo 1 00:05:33.043 06:35:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:33.043 06:35:47 -- scripts/common.sh@365 -- # decimal 2 00:05:33.043 06:35:47 -- scripts/common.sh@352 -- # local d=2 00:05:33.043 06:35:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.043 06:35:47 -- scripts/common.sh@354 -- # echo 2 00:05:33.043 06:35:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:33.044 06:35:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:33.044 06:35:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:33.044 06:35:47 -- scripts/common.sh@367 -- # return 0 00:05:33.044 06:35:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.044 06:35:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:33.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.044 --rc genhtml_branch_coverage=1 00:05:33.044 --rc genhtml_function_coverage=1 00:05:33.044 --rc genhtml_legend=1 00:05:33.044 --rc geninfo_all_blocks=1 00:05:33.044 --rc geninfo_unexecuted_blocks=1 00:05:33.044 00:05:33.044 ' 00:05:33.044 06:35:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:33.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.044 --rc genhtml_branch_coverage=1 00:05:33.044 --rc genhtml_function_coverage=1 00:05:33.044 --rc genhtml_legend=1 00:05:33.044 --rc geninfo_all_blocks=1 00:05:33.044 --rc geninfo_unexecuted_blocks=1 00:05:33.044 00:05:33.044 ' 00:05:33.044 06:35:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:33.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.044 --rc genhtml_branch_coverage=1 00:05:33.044 --rc genhtml_function_coverage=1 00:05:33.044 --rc genhtml_legend=1 00:05:33.044 --rc geninfo_all_blocks=1 00:05:33.044 --rc geninfo_unexecuted_blocks=1 00:05:33.044 00:05:33.044 ' 00:05:33.044 06:35:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:33.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.044 --rc genhtml_branch_coverage=1 00:05:33.044 --rc genhtml_function_coverage=1 00:05:33.044 --rc genhtml_legend=1 00:05:33.044 --rc geninfo_all_blocks=1 00:05:33.044 --rc geninfo_unexecuted_blocks=1 00:05:33.044 00:05:33.044 ' 00:05:33.044 06:35:47 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:33.044 06:35:47 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:33.044 06:35:47 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:33.044 06:35:47 -- accel/accel.sh@59 -- # spdk_tgt_pid=56018 00:05:33.044 06:35:47 -- accel/accel.sh@60 -- # waitforlisten 56018 00:05:33.044 06:35:47 -- common/autotest_common.sh@829 -- # '[' -z 56018 ']' 00:05:33.044 06:35:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.044 06:35:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.044 06:35:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:33.044 06:35:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.044 06:35:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.044 06:35:47 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:33.044 06:35:47 -- accel/accel.sh@58 -- # build_accel_config 00:05:33.044 06:35:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.044 06:35:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.044 06:35:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.044 06:35:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.044 06:35:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.044 06:35:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.044 06:35:47 -- accel/accel.sh@42 -- # jq -r . 00:05:33.390 [2024-12-14 06:35:47.076072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:33.390 [2024-12-14 06:35:47.076191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56018 ] 00:05:33.390 [2024-12-14 06:35:47.214185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.390 [2024-12-14 06:35:47.269061] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.390 [2024-12-14 06:35:47.269238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.327 06:35:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.327 06:35:48 -- common/autotest_common.sh@862 -- # return 0 00:05:34.327 06:35:48 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:34.327 06:35:48 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:34.327 06:35:48 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:34.327 06:35:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.327 06:35:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.327 06:35:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 
06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # IFS== 00:05:34.327 06:35:48 -- accel/accel.sh@64 -- # read -r opc module 00:05:34.327 06:35:48 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:34.327 06:35:48 -- accel/accel.sh@67 -- # killprocess 56018 00:05:34.327 06:35:48 -- common/autotest_common.sh@936 -- # '[' -z 56018 ']' 00:05:34.327 06:35:48 -- common/autotest_common.sh@940 -- # kill -0 56018 00:05:34.327 06:35:48 -- common/autotest_common.sh@941 -- # uname 00:05:34.327 06:35:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:34.327 06:35:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56018 00:05:34.327 06:35:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:34.327 06:35:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:34.327 killing process with pid 56018 00:05:34.327 06:35:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56018' 00:05:34.327 06:35:48 -- common/autotest_common.sh@955 -- # kill 56018 00:05:34.327 06:35:48 -- common/autotest_common.sh@960 -- # wait 56018 00:05:34.587 06:35:48 -- accel/accel.sh@68 -- # trap - ERR 00:05:34.587 06:35:48 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:34.587 06:35:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:34.587 06:35:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.587 06:35:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.587 06:35:48 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:05:34.587 06:35:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:34.587 06:35:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.587 06:35:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.587 06:35:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.587 06:35:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.587 06:35:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.587 06:35:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:34.587 06:35:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.587 06:35:48 -- accel/accel.sh@42 -- # jq -r . 
00:05:34.587 06:35:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.587 06:35:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.587 06:35:48 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:34.587 06:35:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:34.587 06:35:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.587 06:35:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.587 ************************************ 00:05:34.587 START TEST accel_missing_filename 00:05:34.587 ************************************ 00:05:34.587 06:35:48 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:05:34.587 06:35:48 -- common/autotest_common.sh@650 -- # local es=0 00:05:34.587 06:35:48 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:34.587 06:35:48 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:34.587 06:35:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.587 06:35:48 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:34.587 06:35:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:34.587 06:35:48 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:05:34.587 06:35:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:34.587 06:35:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.587 06:35:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.587 06:35:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.587 06:35:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.587 06:35:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.587 06:35:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:34.587 06:35:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.587 06:35:48 -- accel/accel.sh@42 -- # jq -r . 00:05:34.587 [2024-12-14 06:35:48.483734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:34.587 [2024-12-14 06:35:48.483836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56064 ] 00:05:34.846 [2024-12-14 06:35:48.620506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.846 [2024-12-14 06:35:48.670151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.846 [2024-12-14 06:35:48.698921] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:34.846 [2024-12-14 06:35:48.738468] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:34.846 A filename is required. 
00:05:34.846 06:35:48 -- common/autotest_common.sh@653 -- # es=234 00:05:34.846 06:35:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:34.846 06:35:48 -- common/autotest_common.sh@662 -- # es=106 00:05:34.846 06:35:48 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:34.846 06:35:48 -- common/autotest_common.sh@670 -- # es=1 00:05:34.846 06:35:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:34.846 00:05:34.846 real 0m0.368s 00:05:34.846 user 0m0.239s 00:05:34.846 sys 0m0.063s 00:05:34.846 06:35:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.846 06:35:48 -- common/autotest_common.sh@10 -- # set +x 00:05:34.846 ************************************ 00:05:34.846 END TEST accel_missing_filename 00:05:34.846 ************************************ 00:05:35.106 06:35:48 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:35.106 06:35:48 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:35.106 06:35:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.106 06:35:48 -- common/autotest_common.sh@10 -- # set +x 00:05:35.106 ************************************ 00:05:35.106 START TEST accel_compress_verify 00:05:35.106 ************************************ 00:05:35.106 06:35:48 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:35.106 06:35:48 -- common/autotest_common.sh@650 -- # local es=0 00:05:35.106 06:35:48 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:35.106 06:35:48 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:35.106 06:35:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.106 06:35:48 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:35.106 06:35:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.106 06:35:48 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:35.106 06:35:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:35.106 06:35:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.106 06:35:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:35.106 06:35:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.106 06:35:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.106 06:35:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:35.106 06:35:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:35.106 06:35:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:35.106 06:35:48 -- accel/accel.sh@42 -- # jq -r . 00:05:35.107 [2024-12-14 06:35:48.901450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:35.107 [2024-12-14 06:35:48.901544] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56094 ] 00:05:35.107 [2024-12-14 06:35:49.038361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.107 [2024-12-14 06:35:49.092382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.366 [2024-12-14 06:35:49.123370] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:35.366 [2024-12-14 06:35:49.163094] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:35.366 00:05:35.366 Compression does not support the verify option, aborting. 00:05:35.366 06:35:49 -- common/autotest_common.sh@653 -- # es=161 00:05:35.366 06:35:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.366 06:35:49 -- common/autotest_common.sh@662 -- # es=33 00:05:35.366 06:35:49 -- common/autotest_common.sh@663 -- # case "$es" in 00:05:35.366 06:35:49 -- common/autotest_common.sh@670 -- # es=1 00:05:35.366 ************************************ 00:05:35.366 END TEST accel_compress_verify 00:05:35.366 ************************************ 00:05:35.366 06:35:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.366 00:05:35.366 real 0m0.373s 00:05:35.366 user 0m0.251s 00:05:35.366 sys 0m0.067s 00:05:35.366 06:35:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.366 06:35:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.366 06:35:49 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:35.366 06:35:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:35.366 06:35:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.366 06:35:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.366 ************************************ 00:05:35.366 START TEST accel_wrong_workload 00:05:35.366 ************************************ 00:05:35.366 06:35:49 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:05:35.366 06:35:49 -- common/autotest_common.sh@650 -- # local es=0 00:05:35.366 06:35:49 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:35.366 06:35:49 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:35.366 06:35:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.366 06:35:49 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:35.366 06:35:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.366 06:35:49 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:05:35.366 06:35:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:35.366 06:35:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.366 06:35:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:35.366 06:35:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.366 06:35:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.366 06:35:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:35.366 06:35:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:35.366 06:35:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:35.366 06:35:49 -- accel/accel.sh@42 -- # jq -r . 
00:05:35.366 Unsupported workload type: foobar 00:05:35.366 [2024-12-14 06:35:49.329443] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:35.366 accel_perf options: 00:05:35.366 [-h help message] 00:05:35.366 [-q queue depth per core] 00:05:35.366 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:35.366 [-T number of threads per core 00:05:35.366 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:35.366 [-t time in seconds] 00:05:35.366 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:35.366 [ dif_verify, , dif_generate, dif_generate_copy 00:05:35.366 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:35.366 [-l for compress/decompress workloads, name of uncompressed input file 00:05:35.366 [-S for crc32c workload, use this seed value (default 0) 00:05:35.366 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:35.366 [-f for fill workload, use this BYTE value (default 255) 00:05:35.366 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:35.366 [-y verify result if this switch is on] 00:05:35.366 [-a tasks to allocate per core (default: same value as -q)] 00:05:35.366 Can be used to spread operations across a wider range of memory. 00:05:35.366 06:35:49 -- common/autotest_common.sh@653 -- # es=1 00:05:35.366 06:35:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.366 06:35:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.366 06:35:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.366 00:05:35.366 real 0m0.033s 00:05:35.366 user 0m0.015s 00:05:35.366 sys 0m0.016s 00:05:35.366 06:35:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.366 06:35:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.366 ************************************ 00:05:35.366 END TEST accel_wrong_workload 00:05:35.366 ************************************ 00:05:35.626 06:35:49 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:35.626 06:35:49 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:35.626 06:35:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.626 06:35:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.626 ************************************ 00:05:35.626 START TEST accel_negative_buffers 00:05:35.626 ************************************ 00:05:35.626 06:35:49 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:35.626 06:35:49 -- common/autotest_common.sh@650 -- # local es=0 00:05:35.626 06:35:49 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:35.626 06:35:49 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:05:35.626 06:35:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.626 06:35:49 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:05:35.626 06:35:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.626 06:35:49 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:05:35.626 06:35:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:35.626 06:35:49 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:35.626 06:35:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:35.626 06:35:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.626 06:35:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.626 06:35:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:35.626 06:35:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:35.626 06:35:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:35.626 06:35:49 -- accel/accel.sh@42 -- # jq -r . 00:05:35.626 -x option must be non-negative. 00:05:35.626 accel_perf options: 00:05:35.626 [-h help message] 00:05:35.626 [-q queue depth per core] 00:05:35.626 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:35.626 [-T number of threads per core 00:05:35.626 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:35.626 [-t time in seconds] 00:05:35.626 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:35.626 [ dif_verify, , dif_generate, dif_generate_copy 00:05:35.626 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:35.626 [-l for compress/decompress workloads, name of uncompressed input file 00:05:35.626 [-S for crc32c workload, use this seed value (default 0) 00:05:35.626 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:35.626 [-f for fill workload, use this BYTE value (default 255) 00:05:35.626 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:35.626 [-y verify result if this switch is on] 00:05:35.626 [-a tasks to allocate per core (default: same value as -q)] 00:05:35.626 Can be used to spread operations across a wider range of memory. 
00:05:35.626 [2024-12-14 06:35:49.410661] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:35.626 06:35:49 -- common/autotest_common.sh@653 -- # es=1 00:05:35.626 06:35:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.626 06:35:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.626 ************************************ 00:05:35.626 END TEST accel_negative_buffers 00:05:35.626 ************************************ 00:05:35.626 06:35:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.626 00:05:35.626 real 0m0.028s 00:05:35.626 user 0m0.014s 00:05:35.626 sys 0m0.013s 00:05:35.626 06:35:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.626 06:35:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.626 06:35:49 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:35.626 06:35:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:35.626 06:35:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.626 06:35:49 -- common/autotest_common.sh@10 -- # set +x 00:05:35.626 ************************************ 00:05:35.626 START TEST accel_crc32c 00:05:35.626 ************************************ 00:05:35.626 06:35:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:35.626 06:35:49 -- accel/accel.sh@16 -- # local accel_opc 00:05:35.626 06:35:49 -- accel/accel.sh@17 -- # local accel_module 00:05:35.626 06:35:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:35.626 06:35:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:35.626 06:35:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.626 06:35:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:35.626 06:35:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.626 06:35:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.626 06:35:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:35.626 06:35:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:35.626 06:35:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:35.626 06:35:49 -- accel/accel.sh@42 -- # jq -r . 00:05:35.626 [2024-12-14 06:35:49.487193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:35.626 [2024-12-14 06:35:49.487649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56147 ] 00:05:35.885 [2024-12-14 06:35:49.619035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.885 [2024-12-14 06:35:49.668659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.264 06:35:50 -- accel/accel.sh@18 -- # out=' 00:05:37.264 SPDK Configuration: 00:05:37.264 Core mask: 0x1 00:05:37.264 00:05:37.264 Accel Perf Configuration: 00:05:37.264 Workload Type: crc32c 00:05:37.264 CRC-32C seed: 32 00:05:37.264 Transfer size: 4096 bytes 00:05:37.264 Vector count 1 00:05:37.265 Module: software 00:05:37.265 Queue depth: 32 00:05:37.265 Allocate depth: 32 00:05:37.265 # threads/core: 1 00:05:37.265 Run time: 1 seconds 00:05:37.265 Verify: Yes 00:05:37.265 00:05:37.265 Running for 1 seconds... 
00:05:37.265 00:05:37.265 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:37.265 ------------------------------------------------------------------------------------ 00:05:37.265 0,0 525184/s 2051 MiB/s 0 0 00:05:37.265 ==================================================================================== 00:05:37.265 Total 525184/s 2051 MiB/s 0 0' 00:05:37.265 06:35:50 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:50 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:37.265 06:35:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.265 06:35:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:37.265 06:35:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:37.265 06:35:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.265 06:35:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.265 06:35:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:37.265 06:35:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:37.265 06:35:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:37.265 06:35:50 -- accel/accel.sh@42 -- # jq -r . 00:05:37.265 [2024-12-14 06:35:50.847660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:37.265 [2024-12-14 06:35:50.847750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56172 ] 00:05:37.265 [2024-12-14 06:35:50.979713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.265 [2024-12-14 06:35:51.028507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val= 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val= 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val=0x1 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val= 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val= 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val=crc32c 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val=32 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val= 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val=software 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@23 -- # accel_module=software 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val=32 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val=32 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val=1 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val=Yes 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val= 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.265 06:35:51 -- accel/accel.sh@21 -- # val= 00:05:37.265 06:35:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.265 06:35:51 -- accel/accel.sh@20 -- # read -r var val 00:05:38.202 06:35:52 -- accel/accel.sh@21 -- # val= 00:05:38.202 06:35:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.202 06:35:52 -- accel/accel.sh@20 -- # IFS=: 00:05:38.202 06:35:52 -- accel/accel.sh@20 -- # read -r var val 00:05:38.202 06:35:52 -- accel/accel.sh@21 -- # val= 00:05:38.202 06:35:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.202 06:35:52 -- accel/accel.sh@20 -- # IFS=: 00:05:38.202 06:35:52 -- accel/accel.sh@20 -- # read -r var val 00:05:38.202 06:35:52 -- accel/accel.sh@21 -- # val= 00:05:38.202 06:35:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.202 06:35:52 -- accel/accel.sh@20 -- # IFS=: 00:05:38.202 06:35:52 -- accel/accel.sh@20 -- # read -r var val 00:05:38.202 06:35:52 -- accel/accel.sh@21 -- # val= 00:05:38.202 06:35:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.202 06:35:52 -- accel/accel.sh@20 -- # IFS=: 00:05:38.202 06:35:52 -- accel/accel.sh@20 -- # read -r var val 00:05:38.202 06:35:52 -- accel/accel.sh@21 -- # val= 00:05:38.202 06:35:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.202 06:35:52 -- accel/accel.sh@20 -- # IFS=: 00:05:38.202 06:35:52 -- 
accel/accel.sh@20 -- # read -r var val 00:05:38.202 06:35:52 -- accel/accel.sh@21 -- # val= 00:05:38.202 06:35:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.202 06:35:52 -- accel/accel.sh@20 -- # IFS=: 00:05:38.202 06:35:52 -- accel/accel.sh@20 -- # read -r var val 00:05:38.202 06:35:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:38.202 06:35:52 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:38.202 06:35:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.202 00:05:38.202 real 0m2.723s 00:05:38.202 user 0m2.378s 00:05:38.202 sys 0m0.145s 00:05:38.202 06:35:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.202 ************************************ 00:05:38.202 END TEST accel_crc32c 00:05:38.202 ************************************ 00:05:38.202 06:35:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.462 06:35:52 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:38.462 06:35:52 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:38.462 06:35:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.462 06:35:52 -- common/autotest_common.sh@10 -- # set +x 00:05:38.462 ************************************ 00:05:38.462 START TEST accel_crc32c_C2 00:05:38.462 ************************************ 00:05:38.462 06:35:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:38.462 06:35:52 -- accel/accel.sh@16 -- # local accel_opc 00:05:38.462 06:35:52 -- accel/accel.sh@17 -- # local accel_module 00:05:38.462 06:35:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:38.462 06:35:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:38.462 06:35:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.462 06:35:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:38.462 06:35:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.462 06:35:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.462 06:35:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:38.462 06:35:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:38.462 06:35:52 -- accel/accel.sh@41 -- # local IFS=, 00:05:38.462 06:35:52 -- accel/accel.sh@42 -- # jq -r . 00:05:38.462 [2024-12-14 06:35:52.264183] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:38.462 [2024-12-14 06:35:52.264457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56201 ] 00:05:38.462 [2024-12-14 06:35:52.392151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.462 [2024-12-14 06:35:52.441119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.840 06:35:53 -- accel/accel.sh@18 -- # out=' 00:05:39.840 SPDK Configuration: 00:05:39.840 Core mask: 0x1 00:05:39.840 00:05:39.840 Accel Perf Configuration: 00:05:39.840 Workload Type: crc32c 00:05:39.840 CRC-32C seed: 0 00:05:39.840 Transfer size: 4096 bytes 00:05:39.840 Vector count 2 00:05:39.840 Module: software 00:05:39.840 Queue depth: 32 00:05:39.840 Allocate depth: 32 00:05:39.840 # threads/core: 1 00:05:39.840 Run time: 1 seconds 00:05:39.840 Verify: Yes 00:05:39.840 00:05:39.840 Running for 1 seconds... 
00:05:39.840 00:05:39.840 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:39.840 ------------------------------------------------------------------------------------ 00:05:39.840 0,0 403232/s 3150 MiB/s 0 0 00:05:39.840 ==================================================================================== 00:05:39.840 Total 403232/s 1575 MiB/s 0 0' 00:05:39.840 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:39.840 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:39.840 06:35:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:39.840 06:35:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:39.840 06:35:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.840 06:35:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.840 06:35:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.840 06:35:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.840 06:35:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.840 06:35:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.840 06:35:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.840 06:35:53 -- accel/accel.sh@42 -- # jq -r . 00:05:39.840 [2024-12-14 06:35:53.622522] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.840 [2024-12-14 06:35:53.623043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56215 ] 00:05:39.840 [2024-12-14 06:35:53.758206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.840 [2024-12-14 06:35:53.808066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val= 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val= 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val=0x1 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val= 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val= 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val=crc32c 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val=0 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val= 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val=software 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@23 -- # accel_module=software 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val=32 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val=32 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val=1 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val=Yes 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val= 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:40.100 06:35:53 -- accel/accel.sh@21 -- # val= 00:05:40.100 06:35:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # IFS=: 00:05:40.100 06:35:53 -- accel/accel.sh@20 -- # read -r var val 00:05:41.071 06:35:54 -- accel/accel.sh@21 -- # val= 00:05:41.071 06:35:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.071 06:35:54 -- accel/accel.sh@20 -- # IFS=: 00:05:41.071 06:35:54 -- accel/accel.sh@20 -- # read -r var val 00:05:41.071 06:35:54 -- accel/accel.sh@21 -- # val= 00:05:41.071 06:35:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.071 06:35:54 -- accel/accel.sh@20 -- # IFS=: 00:05:41.071 06:35:54 -- accel/accel.sh@20 -- # read -r var val 00:05:41.071 06:35:54 -- accel/accel.sh@21 -- # val= 00:05:41.071 06:35:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.071 06:35:54 -- accel/accel.sh@20 -- # IFS=: 00:05:41.071 06:35:54 -- accel/accel.sh@20 -- # read -r var val 00:05:41.071 06:35:54 -- accel/accel.sh@21 -- # val= 00:05:41.071 06:35:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.071 06:35:54 -- accel/accel.sh@20 -- # IFS=: 00:05:41.071 06:35:54 -- accel/accel.sh@20 -- # read -r var val 00:05:41.071 06:35:54 -- accel/accel.sh@21 -- # val= 00:05:41.071 06:35:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.071 06:35:54 -- accel/accel.sh@20 -- # IFS=: 00:05:41.071 06:35:54 -- 
accel/accel.sh@20 -- # read -r var val 00:05:41.071 06:35:54 -- accel/accel.sh@21 -- # val= 00:05:41.071 ************************************ 00:05:41.071 END TEST accel_crc32c_C2 00:05:41.071 ************************************ 00:05:41.071 06:35:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.071 06:35:54 -- accel/accel.sh@20 -- # IFS=: 00:05:41.071 06:35:54 -- accel/accel.sh@20 -- # read -r var val 00:05:41.071 06:35:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:41.071 06:35:54 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:41.071 06:35:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.071 00:05:41.071 real 0m2.730s 00:05:41.071 user 0m2.390s 00:05:41.071 sys 0m0.140s 00:05:41.071 06:35:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.071 06:35:54 -- common/autotest_common.sh@10 -- # set +x 00:05:41.071 06:35:55 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:41.071 06:35:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:41.071 06:35:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.071 06:35:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.071 ************************************ 00:05:41.071 START TEST accel_copy 00:05:41.071 ************************************ 00:05:41.071 06:35:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:05:41.071 06:35:55 -- accel/accel.sh@16 -- # local accel_opc 00:05:41.071 06:35:55 -- accel/accel.sh@17 -- # local accel_module 00:05:41.071 06:35:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:41.071 06:35:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:41.071 06:35:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.071 06:35:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.071 06:35:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.071 06:35:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.071 06:35:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.071 06:35:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.071 06:35:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.071 06:35:55 -- accel/accel.sh@42 -- # jq -r . 00:05:41.071 [2024-12-14 06:35:55.045928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:41.071 [2024-12-14 06:35:55.046018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56255 ] 00:05:41.329 [2024-12-14 06:35:55.179555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.329 [2024-12-14 06:35:55.229330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.706 06:35:56 -- accel/accel.sh@18 -- # out=' 00:05:42.706 SPDK Configuration: 00:05:42.706 Core mask: 0x1 00:05:42.706 00:05:42.706 Accel Perf Configuration: 00:05:42.706 Workload Type: copy 00:05:42.706 Transfer size: 4096 bytes 00:05:42.706 Vector count 1 00:05:42.706 Module: software 00:05:42.706 Queue depth: 32 00:05:42.707 Allocate depth: 32 00:05:42.707 # threads/core: 1 00:05:42.707 Run time: 1 seconds 00:05:42.707 Verify: Yes 00:05:42.707 00:05:42.707 Running for 1 seconds... 
00:05:42.707 00:05:42.707 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:42.707 ------------------------------------------------------------------------------------ 00:05:42.707 0,0 358048/s 1398 MiB/s 0 0 00:05:42.707 ==================================================================================== 00:05:42.707 Total 358048/s 1398 MiB/s 0 0' 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:42.707 06:35:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:42.707 06:35:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.707 06:35:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.707 06:35:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.707 06:35:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.707 06:35:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.707 06:35:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.707 06:35:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.707 06:35:56 -- accel/accel.sh@42 -- # jq -r . 00:05:42.707 [2024-12-14 06:35:56.411115] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:42.707 [2024-12-14 06:35:56.411240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56269 ] 00:05:42.707 [2024-12-14 06:35:56.547573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.707 [2024-12-14 06:35:56.597122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val= 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val= 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val=0x1 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val= 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val= 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val=copy 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- 
accel/accel.sh@21 -- # val= 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val=software 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@23 -- # accel_module=software 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val=32 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val=32 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val=1 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val=Yes 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val= 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.707 06:35:56 -- accel/accel.sh@21 -- # val= 00:05:42.707 06:35:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.707 06:35:56 -- accel/accel.sh@20 -- # read -r var val 00:05:44.085 06:35:57 -- accel/accel.sh@21 -- # val= 00:05:44.085 06:35:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.085 06:35:57 -- accel/accel.sh@20 -- # IFS=: 00:05:44.085 06:35:57 -- accel/accel.sh@20 -- # read -r var val 00:05:44.085 06:35:57 -- accel/accel.sh@21 -- # val= 00:05:44.085 06:35:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.085 06:35:57 -- accel/accel.sh@20 -- # IFS=: 00:05:44.085 06:35:57 -- accel/accel.sh@20 -- # read -r var val 00:05:44.085 06:35:57 -- accel/accel.sh@21 -- # val= 00:05:44.085 06:35:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.085 06:35:57 -- accel/accel.sh@20 -- # IFS=: 00:05:44.085 06:35:57 -- accel/accel.sh@20 -- # read -r var val 00:05:44.085 06:35:57 -- accel/accel.sh@21 -- # val= 00:05:44.085 06:35:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.085 06:35:57 -- accel/accel.sh@20 -- # IFS=: 00:05:44.085 06:35:57 -- accel/accel.sh@20 -- # read -r var val 00:05:44.085 06:35:57 -- accel/accel.sh@21 -- # val= 00:05:44.085 06:35:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.085 06:35:57 -- accel/accel.sh@20 -- # IFS=: 00:05:44.085 06:35:57 -- accel/accel.sh@20 -- # read -r var val 00:05:44.085 06:35:57 -- accel/accel.sh@21 -- # val= 00:05:44.085 06:35:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.085 06:35:57 -- accel/accel.sh@20 -- # IFS=: 00:05:44.085 06:35:57 -- 
accel/accel.sh@20 -- # read -r var val 00:05:44.085 06:35:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:44.085 06:35:57 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:44.085 06:35:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.085 00:05:44.085 real 0m2.743s 00:05:44.085 user 0m2.408s 00:05:44.085 sys 0m0.136s 00:05:44.085 06:35:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.085 ************************************ 00:05:44.085 END TEST accel_copy 00:05:44.085 ************************************ 00:05:44.085 06:35:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.085 06:35:57 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.085 06:35:57 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:44.085 06:35:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.085 06:35:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.085 ************************************ 00:05:44.085 START TEST accel_fill 00:05:44.085 ************************************ 00:05:44.085 06:35:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.085 06:35:57 -- accel/accel.sh@16 -- # local accel_opc 00:05:44.085 06:35:57 -- accel/accel.sh@17 -- # local accel_module 00:05:44.085 06:35:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.085 06:35:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:44.085 06:35:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.085 06:35:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.085 06:35:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.085 06:35:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.085 06:35:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.085 06:35:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.085 06:35:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.085 06:35:57 -- accel/accel.sh@42 -- # jq -r . 00:05:44.085 [2024-12-14 06:35:57.836288] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:44.085 [2024-12-14 06:35:57.836513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56304 ] 00:05:44.085 [2024-12-14 06:35:57.972299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.085 [2024-12-14 06:35:58.021793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.463 06:35:59 -- accel/accel.sh@18 -- # out=' 00:05:45.463 SPDK Configuration: 00:05:45.463 Core mask: 0x1 00:05:45.463 00:05:45.463 Accel Perf Configuration: 00:05:45.463 Workload Type: fill 00:05:45.463 Fill pattern: 0x80 00:05:45.463 Transfer size: 4096 bytes 00:05:45.463 Vector count 1 00:05:45.463 Module: software 00:05:45.463 Queue depth: 64 00:05:45.463 Allocate depth: 64 00:05:45.463 # threads/core: 1 00:05:45.463 Run time: 1 seconds 00:05:45.463 Verify: Yes 00:05:45.463 00:05:45.463 Running for 1 seconds... 
00:05:45.463 00:05:45.463 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:45.463 ------------------------------------------------------------------------------------ 00:05:45.463 0,0 519488/s 2029 MiB/s 0 0 00:05:45.463 ==================================================================================== 00:05:45.463 Total 519488/s 2029 MiB/s 0 0' 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.463 06:35:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.463 06:35:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.463 06:35:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:45.463 06:35:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.463 06:35:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.463 06:35:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.463 06:35:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.463 06:35:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.463 06:35:59 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.463 06:35:59 -- accel/accel.sh@42 -- # jq -r . 00:05:45.463 [2024-12-14 06:35:59.202551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:45.463 [2024-12-14 06:35:59.202640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56323 ] 00:05:45.463 [2024-12-14 06:35:59.336811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.463 [2024-12-14 06:35:59.385556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.463 06:35:59 -- accel/accel.sh@21 -- # val= 00:05:45.463 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.463 06:35:59 -- accel/accel.sh@21 -- # val= 00:05:45.463 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.463 06:35:59 -- accel/accel.sh@21 -- # val=0x1 00:05:45.463 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.463 06:35:59 -- accel/accel.sh@21 -- # val= 00:05:45.463 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.463 06:35:59 -- accel/accel.sh@21 -- # val= 00:05:45.463 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.463 06:35:59 -- accel/accel.sh@21 -- # val=fill 00:05:45.463 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.463 06:35:59 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.463 06:35:59 -- accel/accel.sh@21 -- # val=0x80 00:05:45.463 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # read -r var val 
00:05:45.463 06:35:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:45.463 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.463 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.464 06:35:59 -- accel/accel.sh@21 -- # val= 00:05:45.464 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.464 06:35:59 -- accel/accel.sh@21 -- # val=software 00:05:45.464 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.464 06:35:59 -- accel/accel.sh@23 -- # accel_module=software 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.464 06:35:59 -- accel/accel.sh@21 -- # val=64 00:05:45.464 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.464 06:35:59 -- accel/accel.sh@21 -- # val=64 00:05:45.464 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.464 06:35:59 -- accel/accel.sh@21 -- # val=1 00:05:45.464 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.464 06:35:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:45.464 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.464 06:35:59 -- accel/accel.sh@21 -- # val=Yes 00:05:45.464 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.464 06:35:59 -- accel/accel.sh@21 -- # val= 00:05:45.464 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.464 06:35:59 -- accel/accel.sh@21 -- # val= 00:05:45.464 06:35:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.464 06:35:59 -- accel/accel.sh@20 -- # read -r var val 00:05:46.844 06:36:00 -- accel/accel.sh@21 -- # val= 00:05:46.844 06:36:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.844 06:36:00 -- accel/accel.sh@20 -- # IFS=: 00:05:46.844 06:36:00 -- accel/accel.sh@20 -- # read -r var val 00:05:46.844 06:36:00 -- accel/accel.sh@21 -- # val= 00:05:46.844 06:36:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.844 06:36:00 -- accel/accel.sh@20 -- # IFS=: 00:05:46.844 06:36:00 -- accel/accel.sh@20 -- # read -r var val 00:05:46.844 06:36:00 -- accel/accel.sh@21 -- # val= 00:05:46.844 06:36:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.844 06:36:00 -- accel/accel.sh@20 -- # IFS=: 00:05:46.844 06:36:00 -- accel/accel.sh@20 -- # read -r var val 00:05:46.844 06:36:00 -- accel/accel.sh@21 -- # val= 00:05:46.844 06:36:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.844 06:36:00 -- accel/accel.sh@20 -- # IFS=: 00:05:46.844 06:36:00 -- accel/accel.sh@20 -- # read -r var val 00:05:46.844 06:36:00 -- accel/accel.sh@21 -- # val= 00:05:46.844 06:36:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.844 06:36:00 -- accel/accel.sh@20 -- # IFS=: 
00:05:46.844 06:36:00 -- accel/accel.sh@20 -- # read -r var val 00:05:46.844 06:36:00 -- accel/accel.sh@21 -- # val= 00:05:46.844 06:36:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.844 06:36:00 -- accel/accel.sh@20 -- # IFS=: 00:05:46.844 06:36:00 -- accel/accel.sh@20 -- # read -r var val 00:05:46.844 06:36:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:46.844 06:36:00 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:46.844 06:36:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.844 00:05:46.844 real 0m2.740s 00:05:46.844 user 0m2.404s 00:05:46.844 sys 0m0.136s 00:05:46.844 06:36:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.844 06:36:00 -- common/autotest_common.sh@10 -- # set +x 00:05:46.844 ************************************ 00:05:46.844 END TEST accel_fill 00:05:46.844 ************************************ 00:05:46.844 06:36:00 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:46.844 06:36:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:46.844 06:36:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.844 06:36:00 -- common/autotest_common.sh@10 -- # set +x 00:05:46.844 ************************************ 00:05:46.844 START TEST accel_copy_crc32c 00:05:46.844 ************************************ 00:05:46.844 06:36:00 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:05:46.844 06:36:00 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.844 06:36:00 -- accel/accel.sh@17 -- # local accel_module 00:05:46.844 06:36:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:46.844 06:36:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:46.844 06:36:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.844 06:36:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.844 06:36:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.844 06:36:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.844 06:36:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.844 06:36:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.844 06:36:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.844 06:36:00 -- accel/accel.sh@42 -- # jq -r . 00:05:46.844 [2024-12-14 06:36:00.630228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.844 [2024-12-14 06:36:00.630352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56352 ] 00:05:46.844 [2024-12-14 06:36:00.766758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.844 [2024-12-14 06:36:00.822333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.218 06:36:01 -- accel/accel.sh@18 -- # out=' 00:05:48.218 SPDK Configuration: 00:05:48.218 Core mask: 0x1 00:05:48.218 00:05:48.218 Accel Perf Configuration: 00:05:48.218 Workload Type: copy_crc32c 00:05:48.218 CRC-32C seed: 0 00:05:48.218 Vector size: 4096 bytes 00:05:48.218 Transfer size: 4096 bytes 00:05:48.218 Vector count 1 00:05:48.218 Module: software 00:05:48.218 Queue depth: 32 00:05:48.218 Allocate depth: 32 00:05:48.218 # threads/core: 1 00:05:48.218 Run time: 1 seconds 00:05:48.218 Verify: Yes 00:05:48.218 00:05:48.218 Running for 1 seconds... 
00:05:48.218 00:05:48.218 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:48.218 ------------------------------------------------------------------------------------ 00:05:48.218 0,0 273856/s 1069 MiB/s 0 0 00:05:48.218 ==================================================================================== 00:05:48.218 Total 273856/s 1069 MiB/s 0 0' 00:05:48.218 06:36:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:48.218 06:36:01 -- accel/accel.sh@20 -- # IFS=: 00:05:48.218 06:36:01 -- accel/accel.sh@20 -- # read -r var val 00:05:48.218 06:36:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:48.218 06:36:01 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.218 06:36:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.218 06:36:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.218 06:36:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.218 06:36:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.218 06:36:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.218 06:36:01 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.218 06:36:01 -- accel/accel.sh@42 -- # jq -r . 00:05:48.218 [2024-12-14 06:36:01.997178] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:48.218 [2024-12-14 06:36:01.997865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56372 ] 00:05:48.218 [2024-12-14 06:36:02.128397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.218 [2024-12-14 06:36:02.184291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val= 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val= 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val=0x1 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val= 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val= 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val=0 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 
06:36:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val= 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val=software 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@23 -- # accel_module=software 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val=32 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val=32 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val=1 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val=Yes 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.477 06:36:02 -- accel/accel.sh@21 -- # val= 00:05:48.477 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.477 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.478 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.478 06:36:02 -- accel/accel.sh@21 -- # val= 00:05:48.478 06:36:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.478 06:36:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.478 06:36:02 -- accel/accel.sh@20 -- # read -r var val 00:05:49.415 06:36:03 -- accel/accel.sh@21 -- # val= 00:05:49.415 06:36:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.415 06:36:03 -- accel/accel.sh@20 -- # IFS=: 00:05:49.415 06:36:03 -- accel/accel.sh@20 -- # read -r var val 00:05:49.415 06:36:03 -- accel/accel.sh@21 -- # val= 00:05:49.415 06:36:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.415 06:36:03 -- accel/accel.sh@20 -- # IFS=: 00:05:49.415 06:36:03 -- accel/accel.sh@20 -- # read -r var val 00:05:49.415 06:36:03 -- accel/accel.sh@21 -- # val= 00:05:49.415 06:36:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.415 06:36:03 -- accel/accel.sh@20 -- # IFS=: 00:05:49.415 06:36:03 -- accel/accel.sh@20 -- # read -r var val 00:05:49.415 06:36:03 -- accel/accel.sh@21 -- # val= 00:05:49.415 06:36:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.415 06:36:03 -- accel/accel.sh@20 -- # IFS=: 
00:05:49.415 06:36:03 -- accel/accel.sh@20 -- # read -r var val 00:05:49.415 06:36:03 -- accel/accel.sh@21 -- # val= 00:05:49.415 06:36:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.415 06:36:03 -- accel/accel.sh@20 -- # IFS=: 00:05:49.416 06:36:03 -- accel/accel.sh@20 -- # read -r var val 00:05:49.416 06:36:03 -- accel/accel.sh@21 -- # val= 00:05:49.416 06:36:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.416 06:36:03 -- accel/accel.sh@20 -- # IFS=: 00:05:49.416 06:36:03 -- accel/accel.sh@20 -- # read -r var val 00:05:49.416 06:36:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:49.416 06:36:03 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:49.416 06:36:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.416 00:05:49.416 real 0m2.747s 00:05:49.416 user 0m2.412s 00:05:49.416 sys 0m0.137s 00:05:49.416 06:36:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.416 06:36:03 -- common/autotest_common.sh@10 -- # set +x 00:05:49.416 ************************************ 00:05:49.416 END TEST accel_copy_crc32c 00:05:49.416 ************************************ 00:05:49.416 06:36:03 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:49.416 06:36:03 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:49.416 06:36:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.416 06:36:03 -- common/autotest_common.sh@10 -- # set +x 00:05:49.675 ************************************ 00:05:49.675 START TEST accel_copy_crc32c_C2 00:05:49.675 ************************************ 00:05:49.675 06:36:03 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:49.675 06:36:03 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.675 06:36:03 -- accel/accel.sh@17 -- # local accel_module 00:05:49.675 06:36:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:49.675 06:36:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:49.675 06:36:03 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.675 06:36:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.675 06:36:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.675 06:36:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.675 06:36:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.675 06:36:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.675 06:36:03 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.675 06:36:03 -- accel/accel.sh@42 -- # jq -r . 00:05:49.675 [2024-12-14 06:36:03.434082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:49.675 [2024-12-14 06:36:03.434178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56406 ] 00:05:49.675 [2024-12-14 06:36:03.569409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.675 [2024-12-14 06:36:03.623854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.052 06:36:04 -- accel/accel.sh@18 -- # out=' 00:05:51.052 SPDK Configuration: 00:05:51.052 Core mask: 0x1 00:05:51.052 00:05:51.052 Accel Perf Configuration: 00:05:51.052 Workload Type: copy_crc32c 00:05:51.052 CRC-32C seed: 0 00:05:51.052 Vector size: 4096 bytes 00:05:51.052 Transfer size: 8192 bytes 00:05:51.052 Vector count 2 00:05:51.052 Module: software 00:05:51.052 Queue depth: 32 00:05:51.052 Allocate depth: 32 00:05:51.052 # threads/core: 1 00:05:51.052 Run time: 1 seconds 00:05:51.052 Verify: Yes 00:05:51.052 00:05:51.052 Running for 1 seconds... 00:05:51.052 00:05:51.052 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:51.052 ------------------------------------------------------------------------------------ 00:05:51.052 0,0 207296/s 1619 MiB/s 0 0 00:05:51.052 ==================================================================================== 00:05:51.052 Total 207296/s 809 MiB/s 0 0' 00:05:51.052 06:36:04 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:51.052 06:36:04 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.052 06:36:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:51.052 06:36:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.052 06:36:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.052 06:36:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.052 06:36:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.052 06:36:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.052 06:36:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.052 06:36:04 -- accel/accel.sh@42 -- # jq -r . 00:05:51.052 [2024-12-14 06:36:04.800685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
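The MiB/s figures in these summaries line up with transfers/s times the transfer size, so the two copy_crc32c tables above can be sanity-checked by hand with plain shell arithmetic:

  echo $((273856 * 4096 / 1024 / 1024))   # ~1069 MiB/s for the 4096-byte copy_crc32c run
  echo $((207296 * 8192 / 1024 / 1024))   # ~1619 MiB/s for the -C 2 run at its 8192-byte transfer size
  echo $((207296 * 4096 / 1024 / 1024))   # ~809 MiB/s, the value printed in that run's Total row

For the -C 2 case the per-core row and the Total row report the same 207296 transfers/s but differ by a factor of two in bandwidth; the Total row appears to be computed against the 4096-byte vector size rather than the full 8192-byte transfer size.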
00:05:51.052 [2024-12-14 06:36:04.800774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56420 ] 00:05:51.052 [2024-12-14 06:36:04.937527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.052 [2024-12-14 06:36:04.986224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val= 00:05:51.052 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val= 00:05:51.052 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val=0x1 00:05:51.052 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val= 00:05:51.052 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val= 00:05:51.052 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:51.052 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.052 06:36:05 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val=0 00:05:51.052 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.052 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:51.052 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val= 00:05:51.052 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val=software 00:05:51.052 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.052 06:36:05 -- accel/accel.sh@23 -- # accel_module=software 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val=32 00:05:51.052 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val=32 
00:05:51.052 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.052 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.052 06:36:05 -- accel/accel.sh@21 -- # val=1 00:05:51.053 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.053 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.053 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.053 06:36:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:51.053 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.053 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.053 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.053 06:36:05 -- accel/accel.sh@21 -- # val=Yes 00:05:51.053 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.053 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.053 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.053 06:36:05 -- accel/accel.sh@21 -- # val= 00:05:51.053 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.053 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.053 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.053 06:36:05 -- accel/accel.sh@21 -- # val= 00:05:51.053 06:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.053 06:36:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.053 06:36:05 -- accel/accel.sh@20 -- # read -r var val 00:05:52.429 06:36:06 -- accel/accel.sh@21 -- # val= 00:05:52.429 06:36:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.429 06:36:06 -- accel/accel.sh@20 -- # IFS=: 00:05:52.429 06:36:06 -- accel/accel.sh@20 -- # read -r var val 00:05:52.429 06:36:06 -- accel/accel.sh@21 -- # val= 00:05:52.429 06:36:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.429 06:36:06 -- accel/accel.sh@20 -- # IFS=: 00:05:52.429 06:36:06 -- accel/accel.sh@20 -- # read -r var val 00:05:52.429 06:36:06 -- accel/accel.sh@21 -- # val= 00:05:52.429 06:36:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.429 06:36:06 -- accel/accel.sh@20 -- # IFS=: 00:05:52.429 06:36:06 -- accel/accel.sh@20 -- # read -r var val 00:05:52.429 06:36:06 -- accel/accel.sh@21 -- # val= 00:05:52.429 06:36:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.429 06:36:06 -- accel/accel.sh@20 -- # IFS=: 00:05:52.429 06:36:06 -- accel/accel.sh@20 -- # read -r var val 00:05:52.429 06:36:06 -- accel/accel.sh@21 -- # val= 00:05:52.429 06:36:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.429 06:36:06 -- accel/accel.sh@20 -- # IFS=: 00:05:52.429 06:36:06 -- accel/accel.sh@20 -- # read -r var val 00:05:52.429 06:36:06 -- accel/accel.sh@21 -- # val= 00:05:52.429 06:36:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.429 06:36:06 -- accel/accel.sh@20 -- # IFS=: 00:05:52.429 06:36:06 -- accel/accel.sh@20 -- # read -r var val 00:05:52.429 06:36:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:52.429 06:36:06 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:52.429 06:36:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.429 00:05:52.429 real 0m2.737s 00:05:52.429 user 0m2.396s 00:05:52.429 sys 0m0.143s 00:05:52.429 06:36:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.429 ************************************ 00:05:52.429 END TEST accel_copy_crc32c_C2 00:05:52.429 ************************************ 00:05:52.429 06:36:06 -- common/autotest_common.sh@10 -- # set +x 00:05:52.429 06:36:06 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:52.429 06:36:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
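The START TEST / END TEST banners and the real/user/sys timings that close each case appear to come from the run_test helper in common/autotest_common.sh (the @97/@1087/@1114 call sites traced above), which wraps each accel_test invocation and times it; roughly 2.7 s of wall time per case covers the two 1-second accel_perf runs plus SPDK app startup and teardown. A rough, hypothetical sketch of that wrapper pattern — not the actual autotest_common.sh source — would be:

  run_test() {
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"    # produces the real/user/sys lines seen after each case
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }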
00:05:52.429 06:36:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.429 06:36:06 -- common/autotest_common.sh@10 -- # set +x 00:05:52.429 ************************************ 00:05:52.429 START TEST accel_dualcast 00:05:52.429 ************************************ 00:05:52.429 06:36:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:05:52.429 06:36:06 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.429 06:36:06 -- accel/accel.sh@17 -- # local accel_module 00:05:52.429 06:36:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:52.429 06:36:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:52.430 06:36:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.430 06:36:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.430 06:36:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.430 06:36:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.430 06:36:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.430 06:36:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.430 06:36:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.430 06:36:06 -- accel/accel.sh@42 -- # jq -r . 00:05:52.430 [2024-12-14 06:36:06.222605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.430 [2024-12-14 06:36:06.222695] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56455 ] 00:05:52.430 [2024-12-14 06:36:06.358306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.430 [2024-12-14 06:36:06.406625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.806 06:36:07 -- accel/accel.sh@18 -- # out=' 00:05:53.806 SPDK Configuration: 00:05:53.806 Core mask: 0x1 00:05:53.806 00:05:53.806 Accel Perf Configuration: 00:05:53.806 Workload Type: dualcast 00:05:53.806 Transfer size: 4096 bytes 00:05:53.806 Vector count 1 00:05:53.806 Module: software 00:05:53.806 Queue depth: 32 00:05:53.806 Allocate depth: 32 00:05:53.806 # threads/core: 1 00:05:53.806 Run time: 1 seconds 00:05:53.806 Verify: Yes 00:05:53.806 00:05:53.806 Running for 1 seconds... 00:05:53.806 00:05:53.806 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:53.806 ------------------------------------------------------------------------------------ 00:05:53.807 0,0 402848/s 1573 MiB/s 0 0 00:05:53.807 ==================================================================================== 00:05:53.807 Total 402848/s 1573 MiB/s 0 0' 00:05:53.807 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.807 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.807 06:36:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:53.807 06:36:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.807 06:36:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:53.807 06:36:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.807 06:36:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.807 06:36:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.807 06:36:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.807 06:36:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.807 06:36:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.807 06:36:07 -- accel/accel.sh@42 -- # jq -r . 
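The dualcast summary above checks out the same way, 402848 transfers/s at 4096 bytes each:

  echo $((402848 * 4096 / 1024 / 1024))   # ~1573 MiB/s

As the trace shows, accel_perf is run twice per case with identical -t 1 -w ... -y arguments: accel.sh@18 captures the first run's report into out= (echoed as the block above), and accel.sh@15 runs the workload again. The long runs of case "$var" / IFS=: / read -r var val entries appear to be xtrace output from accel.sh splitting the tool's "Key: value" report lines; the accel_opc and accel_module values it picks out feed the closing [[ -n dualcast ]] / [[ software == software ]] checks of each TEST.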
00:05:53.807 [2024-12-14 06:36:07.585237] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:53.807 [2024-12-14 06:36:07.585328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56474 ] 00:05:53.807 [2024-12-14 06:36:07.720272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.807 [2024-12-14 06:36:07.768631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.065 06:36:07 -- accel/accel.sh@21 -- # val= 00:05:54.065 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.065 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.065 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.065 06:36:07 -- accel/accel.sh@21 -- # val= 00:05:54.065 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.065 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.065 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.065 06:36:07 -- accel/accel.sh@21 -- # val=0x1 00:05:54.065 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.065 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.065 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.065 06:36:07 -- accel/accel.sh@21 -- # val= 00:05:54.065 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.066 06:36:07 -- accel/accel.sh@21 -- # val= 00:05:54.066 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.066 06:36:07 -- accel/accel.sh@21 -- # val=dualcast 00:05:54.066 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.066 06:36:07 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.066 06:36:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:54.066 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.066 06:36:07 -- accel/accel.sh@21 -- # val= 00:05:54.066 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.066 06:36:07 -- accel/accel.sh@21 -- # val=software 00:05:54.066 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.066 06:36:07 -- accel/accel.sh@23 -- # accel_module=software 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.066 06:36:07 -- accel/accel.sh@21 -- # val=32 00:05:54.066 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.066 06:36:07 -- accel/accel.sh@21 -- # val=32 00:05:54.066 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.066 06:36:07 -- accel/accel.sh@21 -- # val=1 00:05:54.066 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.066 
06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.066 06:36:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:54.066 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.066 06:36:07 -- accel/accel.sh@21 -- # val=Yes 00:05:54.066 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.066 06:36:07 -- accel/accel.sh@21 -- # val= 00:05:54.066 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.066 06:36:07 -- accel/accel.sh@21 -- # val= 00:05:54.066 06:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # IFS=: 00:05:54.066 06:36:07 -- accel/accel.sh@20 -- # read -r var val 00:05:55.002 06:36:08 -- accel/accel.sh@21 -- # val= 00:05:55.002 06:36:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.002 06:36:08 -- accel/accel.sh@20 -- # IFS=: 00:05:55.002 06:36:08 -- accel/accel.sh@20 -- # read -r var val 00:05:55.002 06:36:08 -- accel/accel.sh@21 -- # val= 00:05:55.002 06:36:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.002 06:36:08 -- accel/accel.sh@20 -- # IFS=: 00:05:55.002 06:36:08 -- accel/accel.sh@20 -- # read -r var val 00:05:55.002 06:36:08 -- accel/accel.sh@21 -- # val= 00:05:55.002 06:36:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.002 06:36:08 -- accel/accel.sh@20 -- # IFS=: 00:05:55.002 06:36:08 -- accel/accel.sh@20 -- # read -r var val 00:05:55.002 06:36:08 -- accel/accel.sh@21 -- # val= 00:05:55.002 06:36:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.002 06:36:08 -- accel/accel.sh@20 -- # IFS=: 00:05:55.002 06:36:08 -- accel/accel.sh@20 -- # read -r var val 00:05:55.002 06:36:08 -- accel/accel.sh@21 -- # val= 00:05:55.002 06:36:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.002 06:36:08 -- accel/accel.sh@20 -- # IFS=: 00:05:55.002 06:36:08 -- accel/accel.sh@20 -- # read -r var val 00:05:55.002 06:36:08 -- accel/accel.sh@21 -- # val= 00:05:55.002 06:36:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.002 06:36:08 -- accel/accel.sh@20 -- # IFS=: 00:05:55.002 06:36:08 -- accel/accel.sh@20 -- # read -r var val 00:05:55.002 06:36:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:55.002 06:36:08 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:55.002 06:36:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.002 00:05:55.002 real 0m2.729s 00:05:55.002 user 0m2.383s 00:05:55.002 sys 0m0.148s 00:05:55.002 06:36:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.002 ************************************ 00:05:55.002 END TEST accel_dualcast 00:05:55.002 ************************************ 00:05:55.002 06:36:08 -- common/autotest_common.sh@10 -- # set +x 00:05:55.002 06:36:08 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:55.002 06:36:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:55.002 06:36:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.002 06:36:08 -- common/autotest_common.sh@10 -- # set +x 00:05:55.002 ************************************ 00:05:55.002 START TEST accel_compare 00:05:55.002 ************************************ 00:05:55.002 06:36:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:05:55.002 
06:36:08 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.002 06:36:08 -- accel/accel.sh@17 -- # local accel_module 00:05:55.002 06:36:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:55.002 06:36:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:55.002 06:36:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.002 06:36:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.002 06:36:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.002 06:36:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.002 06:36:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.002 06:36:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.002 06:36:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.002 06:36:08 -- accel/accel.sh@42 -- # jq -r . 00:05:55.261 [2024-12-14 06:36:09.002721] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.261 [2024-12-14 06:36:09.002811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56509 ] 00:05:55.261 [2024-12-14 06:36:09.141790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.261 [2024-12-14 06:36:09.208521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.637 06:36:10 -- accel/accel.sh@18 -- # out=' 00:05:56.637 SPDK Configuration: 00:05:56.637 Core mask: 0x1 00:05:56.637 00:05:56.637 Accel Perf Configuration: 00:05:56.637 Workload Type: compare 00:05:56.637 Transfer size: 4096 bytes 00:05:56.637 Vector count 1 00:05:56.637 Module: software 00:05:56.637 Queue depth: 32 00:05:56.637 Allocate depth: 32 00:05:56.637 # threads/core: 1 00:05:56.637 Run time: 1 seconds 00:05:56.637 Verify: Yes 00:05:56.637 00:05:56.637 Running for 1 seconds... 00:05:56.637 00:05:56.637 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:56.637 ------------------------------------------------------------------------------------ 00:05:56.637 0,0 513696/s 2006 MiB/s 0 0 00:05:56.637 ==================================================================================== 00:05:56.637 Total 513696/s 2006 MiB/s 0 0' 00:05:56.637 06:36:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:56.637 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.637 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:56.638 06:36:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.638 06:36:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.638 06:36:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.638 06:36:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.638 06:36:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.638 06:36:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.638 06:36:10 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.638 06:36:10 -- accel/accel.sh@42 -- # jq -r . 00:05:56.638 [2024-12-14 06:36:10.389846] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
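compare posts the highest transfer rate of the result tables shown so far, 513696 transfers/s, and the bandwidth column again follows directly from that rate and the 4096-byte transfer size:

  echo $((513696 * 4096 / 1024 / 1024))   # ~2006 MiB/s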
00:05:56.638 [2024-12-14 06:36:10.389978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56523 ] 00:05:56.638 [2024-12-14 06:36:10.522271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.638 [2024-12-14 06:36:10.571530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val= 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val= 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val=0x1 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val= 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val= 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val=compare 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val= 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val=software 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@23 -- # accel_module=software 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val=32 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val=32 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val=1 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val='1 seconds' 
00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val=Yes 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val= 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.638 06:36:10 -- accel/accel.sh@21 -- # val= 00:05:56.638 06:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.638 06:36:10 -- accel/accel.sh@20 -- # read -r var val 00:05:58.047 06:36:11 -- accel/accel.sh@21 -- # val= 00:05:58.047 06:36:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.047 06:36:11 -- accel/accel.sh@20 -- # IFS=: 00:05:58.047 06:36:11 -- accel/accel.sh@20 -- # read -r var val 00:05:58.047 06:36:11 -- accel/accel.sh@21 -- # val= 00:05:58.047 06:36:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.047 06:36:11 -- accel/accel.sh@20 -- # IFS=: 00:05:58.047 06:36:11 -- accel/accel.sh@20 -- # read -r var val 00:05:58.047 06:36:11 -- accel/accel.sh@21 -- # val= 00:05:58.047 06:36:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.047 06:36:11 -- accel/accel.sh@20 -- # IFS=: 00:05:58.047 06:36:11 -- accel/accel.sh@20 -- # read -r var val 00:05:58.047 06:36:11 -- accel/accel.sh@21 -- # val= 00:05:58.047 06:36:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.047 06:36:11 -- accel/accel.sh@20 -- # IFS=: 00:05:58.047 06:36:11 -- accel/accel.sh@20 -- # read -r var val 00:05:58.047 06:36:11 -- accel/accel.sh@21 -- # val= 00:05:58.047 06:36:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.047 06:36:11 -- accel/accel.sh@20 -- # IFS=: 00:05:58.047 06:36:11 -- accel/accel.sh@20 -- # read -r var val 00:05:58.047 06:36:11 -- accel/accel.sh@21 -- # val= 00:05:58.047 06:36:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.047 06:36:11 -- accel/accel.sh@20 -- # IFS=: 00:05:58.047 06:36:11 -- accel/accel.sh@20 -- # read -r var val 00:05:58.047 06:36:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:58.047 06:36:11 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:58.047 06:36:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.047 00:05:58.047 real 0m2.749s 00:05:58.047 user 0m2.399s 00:05:58.047 sys 0m0.146s 00:05:58.047 06:36:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.047 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.047 ************************************ 00:05:58.047 END TEST accel_compare 00:05:58.047 ************************************ 00:05:58.047 06:36:11 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:58.047 06:36:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:58.047 06:36:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.047 06:36:11 -- common/autotest_common.sh@10 -- # set +x 00:05:58.047 ************************************ 00:05:58.047 START TEST accel_xor 00:05:58.047 ************************************ 00:05:58.047 06:36:11 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:05:58.047 06:36:11 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.047 06:36:11 -- accel/accel.sh@17 -- # local accel_module 00:05:58.047 
06:36:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:58.047 06:36:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:58.047 06:36:11 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.047 06:36:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.047 06:36:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.047 06:36:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.047 06:36:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.047 06:36:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.047 06:36:11 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.047 06:36:11 -- accel/accel.sh@42 -- # jq -r . 00:05:58.047 [2024-12-14 06:36:11.808322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.047 [2024-12-14 06:36:11.808419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56557 ] 00:05:58.047 [2024-12-14 06:36:11.946116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.047 [2024-12-14 06:36:11.993076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.422 06:36:13 -- accel/accel.sh@18 -- # out=' 00:05:59.422 SPDK Configuration: 00:05:59.422 Core mask: 0x1 00:05:59.422 00:05:59.423 Accel Perf Configuration: 00:05:59.423 Workload Type: xor 00:05:59.423 Source buffers: 2 00:05:59.423 Transfer size: 4096 bytes 00:05:59.423 Vector count 1 00:05:59.423 Module: software 00:05:59.423 Queue depth: 32 00:05:59.423 Allocate depth: 32 00:05:59.423 # threads/core: 1 00:05:59.423 Run time: 1 seconds 00:05:59.423 Verify: Yes 00:05:59.423 00:05:59.423 Running for 1 seconds... 00:05:59.423 00:05:59.423 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:59.423 ------------------------------------------------------------------------------------ 00:05:59.423 0,0 286784/s 1120 MiB/s 0 0 00:05:59.423 ==================================================================================== 00:05:59.423 Total 286784/s 1120 MiB/s 0 0' 00:05:59.423 06:36:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:59.423 06:36:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.423 06:36:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.423 06:36:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.423 06:36:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.423 06:36:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.423 06:36:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.423 06:36:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.423 06:36:13 -- accel/accel.sh@42 -- # jq -r . 00:05:59.423 [2024-12-14 06:36:13.166444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
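For the two-source xor case, 286784 transfers/s at 4096 bytes per transfer gives

  echo $((286784 * 4096 / 1024 / 1024))   # ~1120 MiB/s

matching the table above. "Source buffers: 2" means each operation XORs two 4096-byte source buffers together into a destination buffer, and -y ("Verify: Yes") checks the result after each run.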
00:05:59.423 [2024-12-14 06:36:13.166542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56577 ] 00:05:59.423 [2024-12-14 06:36:13.301866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.423 [2024-12-14 06:36:13.348704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val= 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val= 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val=0x1 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val= 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val= 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val=xor 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val=2 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val= 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val=software 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@23 -- # accel_module=software 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val=32 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val=32 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val=1 00:05:59.423 06:36:13 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val=Yes 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val= 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.423 06:36:13 -- accel/accel.sh@21 -- # val= 00:05:59.423 06:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.423 06:36:13 -- accel/accel.sh@20 -- # read -r var val 00:06:00.798 06:36:14 -- accel/accel.sh@21 -- # val= 00:06:00.799 06:36:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.799 06:36:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.799 06:36:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.799 06:36:14 -- accel/accel.sh@21 -- # val= 00:06:00.799 06:36:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.799 06:36:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.799 06:36:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.799 06:36:14 -- accel/accel.sh@21 -- # val= 00:06:00.799 06:36:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.799 06:36:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.799 06:36:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.799 06:36:14 -- accel/accel.sh@21 -- # val= 00:06:00.799 06:36:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.799 06:36:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.799 06:36:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.799 06:36:14 -- accel/accel.sh@21 -- # val= 00:06:00.799 06:36:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.799 06:36:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.799 06:36:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.799 06:36:14 -- accel/accel.sh@21 -- # val= 00:06:00.799 06:36:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.799 06:36:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.799 06:36:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.799 06:36:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.799 06:36:14 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:00.799 06:36:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.799 00:06:00.799 real 0m2.720s 00:06:00.799 user 0m2.385s 00:06:00.799 sys 0m0.131s 00:06:00.799 ************************************ 00:06:00.799 END TEST accel_xor 00:06:00.799 ************************************ 00:06:00.799 06:36:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.799 06:36:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.799 06:36:14 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:00.799 06:36:14 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:00.799 06:36:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.799 06:36:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.799 ************************************ 00:06:00.799 START TEST accel_xor 00:06:00.799 ************************************ 00:06:00.799 
06:36:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:00.799 06:36:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.799 06:36:14 -- accel/accel.sh@17 -- # local accel_module 00:06:00.799 06:36:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:00.799 06:36:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:00.799 06:36:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.799 06:36:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.799 06:36:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.799 06:36:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.799 06:36:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.799 06:36:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.799 06:36:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.799 06:36:14 -- accel/accel.sh@42 -- # jq -r . 00:06:00.799 [2024-12-14 06:36:14.574610] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.799 [2024-12-14 06:36:14.574693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56606 ] 00:06:00.799 [2024-12-14 06:36:14.701312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.799 [2024-12-14 06:36:14.752115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.176 06:36:15 -- accel/accel.sh@18 -- # out=' 00:06:02.176 SPDK Configuration: 00:06:02.176 Core mask: 0x1 00:06:02.176 00:06:02.176 Accel Perf Configuration: 00:06:02.176 Workload Type: xor 00:06:02.176 Source buffers: 3 00:06:02.176 Transfer size: 4096 bytes 00:06:02.176 Vector count 1 00:06:02.176 Module: software 00:06:02.176 Queue depth: 32 00:06:02.176 Allocate depth: 32 00:06:02.176 # threads/core: 1 00:06:02.176 Run time: 1 seconds 00:06:02.176 Verify: Yes 00:06:02.176 00:06:02.176 Running for 1 seconds... 00:06:02.176 00:06:02.176 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:02.176 ------------------------------------------------------------------------------------ 00:06:02.176 0,0 268800/s 1050 MiB/s 0 0 00:06:02.176 ==================================================================================== 00:06:02.176 Total 268800/s 1050 MiB/s 0 0' 00:06:02.176 06:36:15 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:02.176 06:36:15 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:02.176 06:36:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.176 06:36:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.176 06:36:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.176 06:36:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.176 06:36:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.176 06:36:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.176 06:36:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.176 06:36:15 -- accel/accel.sh@42 -- # jq -r . 00:06:02.176 [2024-12-14 06:36:15.922661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
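Passing -x 3 raises "Source buffers" from 2 to 3, and the rate drops modestly (286784/s to 268800/s), consistent with reading one more 4096-byte source per operation. The bandwidth figure again follows directly, and in this case comes out exact:

  echo $((268800 * 4096 / 1024 / 1024))   # 1050 MiB/s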
00:06:02.176 [2024-12-14 06:36:15.922758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56625 ] 00:06:02.176 [2024-12-14 06:36:16.048945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.176 [2024-12-14 06:36:16.096316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val= 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val= 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val=0x1 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val= 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val= 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val=xor 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val=3 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val= 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val=software 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@23 -- # accel_module=software 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val=32 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val=32 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val=1 00:06:02.176 06:36:16 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val=Yes 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val= 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.176 06:36:16 -- accel/accel.sh@21 -- # val= 00:06:02.176 06:36:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.176 06:36:16 -- accel/accel.sh@20 -- # read -r var val 00:06:03.551 06:36:17 -- accel/accel.sh@21 -- # val= 00:06:03.551 06:36:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.551 06:36:17 -- accel/accel.sh@20 -- # IFS=: 00:06:03.551 06:36:17 -- accel/accel.sh@20 -- # read -r var val 00:06:03.551 06:36:17 -- accel/accel.sh@21 -- # val= 00:06:03.551 06:36:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.551 06:36:17 -- accel/accel.sh@20 -- # IFS=: 00:06:03.551 06:36:17 -- accel/accel.sh@20 -- # read -r var val 00:06:03.551 06:36:17 -- accel/accel.sh@21 -- # val= 00:06:03.551 06:36:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.551 06:36:17 -- accel/accel.sh@20 -- # IFS=: 00:06:03.551 06:36:17 -- accel/accel.sh@20 -- # read -r var val 00:06:03.551 06:36:17 -- accel/accel.sh@21 -- # val= 00:06:03.551 06:36:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.551 06:36:17 -- accel/accel.sh@20 -- # IFS=: 00:06:03.551 06:36:17 -- accel/accel.sh@20 -- # read -r var val 00:06:03.551 06:36:17 -- accel/accel.sh@21 -- # val= 00:06:03.551 06:36:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.551 06:36:17 -- accel/accel.sh@20 -- # IFS=: 00:06:03.551 06:36:17 -- accel/accel.sh@20 -- # read -r var val 00:06:03.551 06:36:17 -- accel/accel.sh@21 -- # val= 00:06:03.551 06:36:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.551 06:36:17 -- accel/accel.sh@20 -- # IFS=: 00:06:03.551 06:36:17 -- accel/accel.sh@20 -- # read -r var val 00:06:03.551 06:36:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:03.551 ************************************ 00:06:03.551 END TEST accel_xor 00:06:03.551 ************************************ 00:06:03.551 06:36:17 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:03.551 06:36:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.551 00:06:03.551 real 0m2.707s 00:06:03.551 user 0m2.383s 00:06:03.551 sys 0m0.118s 00:06:03.551 06:36:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.551 06:36:17 -- common/autotest_common.sh@10 -- # set +x 00:06:03.551 06:36:17 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:03.551 06:36:17 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:03.551 06:36:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.551 06:36:17 -- common/autotest_common.sh@10 -- # set +x 00:06:03.551 ************************************ 00:06:03.551 START TEST accel_dif_verify 00:06:03.551 ************************************ 
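The Bandwidth column in the result tables above and below follows directly from the transfer rate and the 4096-byte transfer size, so with only core 0 active the per-core row and the Total row report the same figure. A minimal shell sanity check, using values recorded in this log:
# MiB/s = transfers/s * 4096 bytes / 2^20
echo '268800 * 4096 / 1048576' | bc   # 1050 -> matches the xor table above
echo '118592 * 4096 / 1048576' | bc   # 463  -> matches the dif_verify table below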
00:06:03.551 06:36:17 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:03.551 06:36:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.551 06:36:17 -- accel/accel.sh@17 -- # local accel_module 00:06:03.551 06:36:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:03.551 06:36:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:03.551 06:36:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.551 06:36:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.552 06:36:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.552 06:36:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.552 06:36:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.552 06:36:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.552 06:36:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.552 06:36:17 -- accel/accel.sh@42 -- # jq -r . 00:06:03.552 [2024-12-14 06:36:17.346402] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:03.552 [2024-12-14 06:36:17.346694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56660 ] 00:06:03.552 [2024-12-14 06:36:17.493865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.810 [2024-12-14 06:36:17.541980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.745 06:36:18 -- accel/accel.sh@18 -- # out=' 00:06:04.745 SPDK Configuration: 00:06:04.745 Core mask: 0x1 00:06:04.745 00:06:04.745 Accel Perf Configuration: 00:06:04.745 Workload Type: dif_verify 00:06:04.745 Vector size: 4096 bytes 00:06:04.745 Transfer size: 4096 bytes 00:06:04.745 Block size: 512 bytes 00:06:04.745 Metadata size: 8 bytes 00:06:04.745 Vector count 1 00:06:04.745 Module: software 00:06:04.745 Queue depth: 32 00:06:04.745 Allocate depth: 32 00:06:04.745 # threads/core: 1 00:06:04.745 Run time: 1 seconds 00:06:04.745 Verify: No 00:06:04.745 00:06:04.745 Running for 1 seconds... 00:06:04.745 00:06:04.745 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.745 ------------------------------------------------------------------------------------ 00:06:04.745 0,0 118592/s 463 MiB/s 0 0 00:06:04.745 ==================================================================================== 00:06:04.745 Total 118592/s 463 MiB/s 0 0' 00:06:04.745 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:04.745 06:36:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:04.745 06:36:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:04.745 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:04.745 06:36:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.745 06:36:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.745 06:36:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.745 06:36:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.745 06:36:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.745 06:36:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.745 06:36:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.745 06:36:18 -- accel/accel.sh@42 -- # jq -r . 00:06:04.745 [2024-12-14 06:36:18.715264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:04.745 [2024-12-14 06:36:18.715354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56674 ] 00:06:05.004 [2024-12-14 06:36:18.849093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.004 [2024-12-14 06:36:18.905428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val= 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val= 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val=0x1 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val= 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val= 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val=dif_verify 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val= 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val=software 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 
-- # val=32 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val=32 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val=1 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val=No 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val= 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:05.004 06:36:18 -- accel/accel.sh@21 -- # val= 00:06:05.004 06:36:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # IFS=: 00:06:05.004 06:36:18 -- accel/accel.sh@20 -- # read -r var val 00:06:06.380 06:36:20 -- accel/accel.sh@21 -- # val= 00:06:06.380 06:36:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.380 06:36:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.380 06:36:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.380 06:36:20 -- accel/accel.sh@21 -- # val= 00:06:06.380 06:36:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.380 06:36:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.380 06:36:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.380 06:36:20 -- accel/accel.sh@21 -- # val= 00:06:06.380 06:36:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.380 06:36:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.380 06:36:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.380 06:36:20 -- accel/accel.sh@21 -- # val= 00:06:06.380 06:36:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.380 06:36:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.380 06:36:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.380 06:36:20 -- accel/accel.sh@21 -- # val= 00:06:06.380 06:36:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.380 06:36:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.380 06:36:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.380 06:36:20 -- accel/accel.sh@21 -- # val= 00:06:06.380 06:36:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.380 06:36:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.380 06:36:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.380 06:36:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:06.380 06:36:20 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:06.380 06:36:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.380 00:06:06.380 real 0m2.754s 00:06:06.380 user 0m2.399s 00:06:06.380 sys 0m0.152s 00:06:06.380 06:36:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.380 06:36:20 -- common/autotest_common.sh@10 -- # set +x 00:06:06.380 ************************************ 00:06:06.380 END TEST 
accel_dif_verify 00:06:06.380 ************************************ 00:06:06.380 06:36:20 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:06.380 06:36:20 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:06.380 06:36:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.380 06:36:20 -- common/autotest_common.sh@10 -- # set +x 00:06:06.380 ************************************ 00:06:06.380 START TEST accel_dif_generate 00:06:06.380 ************************************ 00:06:06.380 06:36:20 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:06.380 06:36:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.380 06:36:20 -- accel/accel.sh@17 -- # local accel_module 00:06:06.380 06:36:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:06.380 06:36:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:06.380 06:36:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.380 06:36:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.380 06:36:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.380 06:36:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.380 06:36:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.380 06:36:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.380 06:36:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.380 06:36:20 -- accel/accel.sh@42 -- # jq -r . 00:06:06.380 [2024-12-14 06:36:20.141866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.380 [2024-12-14 06:36:20.142007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56708 ] 00:06:06.380 [2024-12-14 06:36:20.268656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.380 [2024-12-14 06:36:20.316907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.756 06:36:21 -- accel/accel.sh@18 -- # out=' 00:06:07.756 SPDK Configuration: 00:06:07.756 Core mask: 0x1 00:06:07.756 00:06:07.756 Accel Perf Configuration: 00:06:07.756 Workload Type: dif_generate 00:06:07.756 Vector size: 4096 bytes 00:06:07.756 Transfer size: 4096 bytes 00:06:07.756 Block size: 512 bytes 00:06:07.756 Metadata size: 8 bytes 00:06:07.756 Vector count 1 00:06:07.756 Module: software 00:06:07.756 Queue depth: 32 00:06:07.756 Allocate depth: 32 00:06:07.756 # threads/core: 1 00:06:07.756 Run time: 1 seconds 00:06:07.756 Verify: No 00:06:07.756 00:06:07.756 Running for 1 seconds... 
00:06:07.756 00:06:07.756 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:07.756 ------------------------------------------------------------------------------------ 00:06:07.756 0,0 143072/s 558 MiB/s 0 0 00:06:07.756 ==================================================================================== 00:06:07.756 Total 143072/s 558 MiB/s 0 0' 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:07.756 06:36:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.756 06:36:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.756 06:36:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.756 06:36:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.756 06:36:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.756 06:36:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.756 06:36:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.756 06:36:21 -- accel/accel.sh@42 -- # jq -r . 00:06:07.756 [2024-12-14 06:36:21.490869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.756 [2024-12-14 06:36:21.490973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56728 ] 00:06:07.756 [2024-12-14 06:36:21.625258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.756 [2024-12-14 06:36:21.678078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val= 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val= 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val=0x1 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val= 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val= 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val=dif_generate 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val
00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val= 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val=software 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val=32 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val=32 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val=1 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val=No 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val= 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.756 06:36:21 -- accel/accel.sh@21 -- # val= 00:06:07.756 06:36:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.756 06:36:21 -- accel/accel.sh@20 -- # read -r var val 00:06:09.131 06:36:22 -- accel/accel.sh@21 -- # val= 00:06:09.131 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.131 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.131 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.131 06:36:22 -- accel/accel.sh@21 -- # val= 00:06:09.131 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.131 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.131 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.131 06:36:22 -- accel/accel.sh@21 -- # val= 00:06:09.131 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.131 06:36:22 -- 
accel/accel.sh@20 -- # IFS=: 00:06:09.131 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.131 06:36:22 -- accel/accel.sh@21 -- # val= 00:06:09.131 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.131 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.131 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.131 06:36:22 -- accel/accel.sh@21 -- # val= 00:06:09.131 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.131 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.131 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.131 06:36:22 -- accel/accel.sh@21 -- # val= 00:06:09.131 06:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.131 06:36:22 -- accel/accel.sh@20 -- # IFS=: 00:06:09.131 06:36:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.131 06:36:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.131 06:36:22 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:09.131 06:36:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.131 00:06:09.131 real 0m2.716s 00:06:09.131 user 0m2.390s 00:06:09.131 sys 0m0.128s 00:06:09.131 06:36:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.131 06:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:09.131 ************************************ 00:06:09.131 END TEST accel_dif_generate 00:06:09.131 ************************************ 00:06:09.131 06:36:22 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:09.131 06:36:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:09.131 06:36:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.131 06:36:22 -- common/autotest_common.sh@10 -- # set +x 00:06:09.131 ************************************ 00:06:09.131 START TEST accel_dif_generate_copy 00:06:09.131 ************************************ 00:06:09.131 06:36:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:09.131 06:36:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.131 06:36:22 -- accel/accel.sh@17 -- # local accel_module 00:06:09.131 06:36:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:09.131 06:36:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:09.131 06:36:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.131 06:36:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.131 06:36:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.131 06:36:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.131 06:36:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.131 06:36:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.131 06:36:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.131 06:36:22 -- accel/accel.sh@42 -- # jq -r . 00:06:09.131 [2024-12-14 06:36:22.915451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:09.131 [2024-12-14 06:36:22.915533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56757 ] 00:06:09.131 [2024-12-14 06:36:23.044055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.131 [2024-12-14 06:36:23.094850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.506 06:36:24 -- accel/accel.sh@18 -- # out=' 00:06:10.506 SPDK Configuration: 00:06:10.506 Core mask: 0x1 00:06:10.506 00:06:10.506 Accel Perf Configuration: 00:06:10.506 Workload Type: dif_generate_copy 00:06:10.506 Vector size: 4096 bytes 00:06:10.506 Transfer size: 4096 bytes 00:06:10.506 Vector count 1 00:06:10.506 Module: software 00:06:10.506 Queue depth: 32 00:06:10.506 Allocate depth: 32 00:06:10.506 # threads/core: 1 00:06:10.506 Run time: 1 seconds 00:06:10.506 Verify: No 00:06:10.506 00:06:10.506 Running for 1 seconds... 00:06:10.506 00:06:10.506 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.506 ------------------------------------------------------------------------------------ 00:06:10.506 0,0 111360/s 435 MiB/s 0 0 00:06:10.507 ==================================================================================== 00:06:10.507 Total 111360/s 435 MiB/s 0 0' 00:06:10.507 06:36:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.507 06:36:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:10.507 06:36:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.507 06:36:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.507 06:36:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.507 06:36:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.507 06:36:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.507 06:36:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.507 06:36:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.507 06:36:24 -- accel/accel.sh@42 -- # jq -r . 00:06:10.507 [2024-12-14 06:36:24.276996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:10.507 [2024-12-14 06:36:24.277085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56776 ] 00:06:10.507 [2024-12-14 06:36:24.413778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.507 [2024-12-14 06:36:24.460547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.507 06:36:24 -- accel/accel.sh@21 -- # val= 00:06:10.507 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.507 06:36:24 -- accel/accel.sh@21 -- # val= 00:06:10.507 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.507 06:36:24 -- accel/accel.sh@21 -- # val=0x1 00:06:10.507 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.507 06:36:24 -- accel/accel.sh@21 -- # val= 00:06:10.507 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.507 06:36:24 -- accel/accel.sh@21 -- # val= 00:06:10.507 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.507 06:36:24 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:10.507 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.507 06:36:24 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.507 06:36:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.507 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.507 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.765 06:36:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.765 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.765 06:36:24 -- accel/accel.sh@21 -- # val= 00:06:10.765 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.765 06:36:24 -- accel/accel.sh@21 -- # val=software 00:06:10.765 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.765 06:36:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.765 06:36:24 -- accel/accel.sh@21 -- # val=32 00:06:10.765 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.765 06:36:24 -- accel/accel.sh@21 -- # val=32 00:06:10.765 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.765 06:36:24 -- accel/accel.sh@21 
-- # val=1 00:06:10.765 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.765 06:36:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:10.765 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.765 06:36:24 -- accel/accel.sh@21 -- # val=No 00:06:10.765 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.765 06:36:24 -- accel/accel.sh@21 -- # val= 00:06:10.765 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.765 06:36:24 -- accel/accel.sh@21 -- # val= 00:06:10.765 06:36:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.765 06:36:24 -- accel/accel.sh@20 -- # read -r var val 00:06:11.702 06:36:25 -- accel/accel.sh@21 -- # val= 00:06:11.702 06:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.702 06:36:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.702 06:36:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.702 06:36:25 -- accel/accel.sh@21 -- # val= 00:06:11.702 06:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.702 06:36:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.702 06:36:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.702 06:36:25 -- accel/accel.sh@21 -- # val= 00:06:11.702 06:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.702 06:36:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.702 06:36:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.702 06:36:25 -- accel/accel.sh@21 -- # val= 00:06:11.702 06:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.702 06:36:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.702 06:36:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.702 06:36:25 -- accel/accel.sh@21 -- # val= 00:06:11.702 06:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.702 06:36:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.702 06:36:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.702 06:36:25 -- accel/accel.sh@21 -- # val= 00:06:11.702 06:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.702 06:36:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.702 06:36:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.702 06:36:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:11.702 06:36:25 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:11.702 06:36:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.702 00:06:11.702 real 0m2.722s 00:06:11.702 user 0m2.388s 00:06:11.702 sys 0m0.133s 00:06:11.702 06:36:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.702 ************************************ 00:06:11.702 END TEST accel_dif_generate_copy 00:06:11.702 ************************************ 00:06:11.702 06:36:25 -- common/autotest_common.sh@10 -- # set +x 00:06:11.702 06:36:25 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:11.702 06:36:25 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:11.702 06:36:25 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:11.702 06:36:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.702 06:36:25 -- 
common/autotest_common.sh@10 -- # set +x 00:06:11.702 ************************************ 00:06:11.702 START TEST accel_comp 00:06:11.702 ************************************ 00:06:11.702 06:36:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:11.702 06:36:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.702 06:36:25 -- accel/accel.sh@17 -- # local accel_module 00:06:11.702 06:36:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:11.702 06:36:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:11.702 06:36:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.702 06:36:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.702 06:36:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.702 06:36:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.702 06:36:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.702 06:36:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.702 06:36:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.702 06:36:25 -- accel/accel.sh@42 -- # jq -r . 00:06:11.702 [2024-12-14 06:36:25.684815] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.702 [2024-12-14 06:36:25.684940] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56811 ] 00:06:11.960 [2024-12-14 06:36:25.821708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.960 [2024-12-14 06:36:25.872355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.337 06:36:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:13.337 00:06:13.337 SPDK Configuration: 00:06:13.337 Core mask: 0x1 00:06:13.337 00:06:13.337 Accel Perf Configuration: 00:06:13.337 Workload Type: compress 00:06:13.337 Transfer size: 4096 bytes 00:06:13.337 Vector count 1 00:06:13.337 Module: software 00:06:13.337 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:13.337 Queue depth: 32 00:06:13.337 Allocate depth: 32 00:06:13.337 # threads/core: 1 00:06:13.337 Run time: 1 seconds 00:06:13.337 Verify: No 00:06:13.337 00:06:13.337 Running for 1 seconds... 
00:06:13.337 00:06:13.337 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:13.337 ------------------------------------------------------------------------------------ 00:06:13.337 0,0 56416/s 220 MiB/s 0 0 00:06:13.337 ==================================================================================== 00:06:13.337 Total 56416/s 220 MiB/s 0 0' 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:13.337 06:36:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.337 06:36:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.337 06:36:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.337 06:36:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.337 06:36:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.337 06:36:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.337 06:36:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.337 06:36:27 -- accel/accel.sh@42 -- # jq -r . 00:06:13.337 [2024-12-14 06:36:27.061234] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.337 [2024-12-14 06:36:27.061322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56825 ] 00:06:13.337 [2024-12-14 06:36:27.196038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.337 [2024-12-14 06:36:27.252295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val=0x1 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val=compress 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=:
00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val=software 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@23 -- # accel_module=software 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val=32 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val=32 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val=1 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val=No 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.337 06:36:27 -- accel/accel.sh@21 -- # val= 00:06:13.337 06:36:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.337 06:36:27 -- accel/accel.sh@20 -- # read -r var val 00:06:14.716 06:36:28 -- accel/accel.sh@21 -- # val= 00:06:14.716 06:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.716 06:36:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.716 06:36:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.716 06:36:28 -- accel/accel.sh@21 -- # val= 00:06:14.716 06:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.716 06:36:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.716 06:36:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.716 06:36:28 -- accel/accel.sh@21 -- # val= 00:06:14.716 06:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.716 06:36:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.716 06:36:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.716 06:36:28 -- accel/accel.sh@21 -- # val= 
00:06:14.716 06:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.716 06:36:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.716 06:36:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.716 06:36:28 -- accel/accel.sh@21 -- # val= 00:06:14.716 06:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.716 06:36:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.716 06:36:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.716 06:36:28 -- accel/accel.sh@21 -- # val= 00:06:14.716 06:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.716 06:36:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.716 06:36:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.716 06:36:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:14.716 06:36:28 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:14.716 06:36:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.716 00:06:14.716 real 0m2.757s 00:06:14.716 user 0m2.410s 00:06:14.716 sys 0m0.142s 00:06:14.716 06:36:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:14.716 ************************************ 00:06:14.716 END TEST accel_comp 00:06:14.716 ************************************ 00:06:14.716 06:36:28 -- common/autotest_common.sh@10 -- # set +x 00:06:14.716 06:36:28 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:14.716 06:36:28 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:14.716 06:36:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.716 06:36:28 -- common/autotest_common.sh@10 -- # set +x 00:06:14.716 ************************************ 00:06:14.716 START TEST accel_decomp 00:06:14.716 ************************************ 00:06:14.716 06:36:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:14.716 06:36:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.716 06:36:28 -- accel/accel.sh@17 -- # local accel_module 00:06:14.716 06:36:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:14.716 06:36:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:14.716 06:36:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.717 06:36:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.717 06:36:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.717 06:36:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.717 06:36:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.717 06:36:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.717 06:36:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.717 06:36:28 -- accel/accel.sh@42 -- # jq -r . 00:06:14.717 [2024-12-14 06:36:28.495463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:14.717 [2024-12-14 06:36:28.495573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56865 ] 00:06:14.717 [2024-12-14 06:36:28.624354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.717 [2024-12-14 06:36:28.672329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.095 06:36:29 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:16.095 00:06:16.095 SPDK Configuration: 00:06:16.095 Core mask: 0x1 00:06:16.095 00:06:16.095 Accel Perf Configuration: 00:06:16.095 Workload Type: decompress 00:06:16.095 Transfer size: 4096 bytes 00:06:16.095 Vector count 1 00:06:16.095 Module: software 00:06:16.095 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:16.095 Queue depth: 32 00:06:16.095 Allocate depth: 32 00:06:16.095 # threads/core: 1 00:06:16.095 Run time: 1 seconds 00:06:16.095 Verify: Yes 00:06:16.095 00:06:16.095 Running for 1 seconds... 00:06:16.095 00:06:16.095 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:16.095 ------------------------------------------------------------------------------------ 00:06:16.095 0,0 78624/s 307 MiB/s 0 0 00:06:16.095 ==================================================================================== 00:06:16.095 Total 78624/s 307 MiB/s 0 0' 00:06:16.095 06:36:29 -- accel/accel.sh@20 -- # IFS=: 00:06:16.095 06:36:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:16.095 06:36:29 -- accel/accel.sh@20 -- # read -r var val 00:06:16.096 06:36:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.096 06:36:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:16.096 06:36:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.096 06:36:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.096 06:36:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.096 06:36:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.096 06:36:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.096 06:36:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.096 06:36:29 -- accel/accel.sh@42 -- # jq -r . 00:06:16.096 [2024-12-14 06:36:29.865131] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:16.096 [2024-12-14 06:36:29.865252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56879 ] 00:06:16.096 [2024-12-14 06:36:30.001175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.096 [2024-12-14 06:36:30.055161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val=0x1 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val=decompress 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val=software 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@23 -- # accel_module=software 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val=32 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- 
accel/accel.sh@21 -- # val=32 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val=1 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val=Yes 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.355 06:36:30 -- accel/accel.sh@21 -- # val= 00:06:16.355 06:36:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.355 06:36:30 -- accel/accel.sh@20 -- # read -r var val 00:06:17.356 06:36:31 -- accel/accel.sh@21 -- # val= 00:06:17.356 06:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.356 06:36:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.356 06:36:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.356 06:36:31 -- accel/accel.sh@21 -- # val= 00:06:17.356 06:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.356 06:36:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.356 06:36:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.356 06:36:31 -- accel/accel.sh@21 -- # val= 00:06:17.356 06:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.356 06:36:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.356 06:36:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.356 06:36:31 -- accel/accel.sh@21 -- # val= 00:06:17.356 ************************************ 00:06:17.356 END TEST accel_decomp 00:06:17.356 ************************************ 00:06:17.356 06:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.356 06:36:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.356 06:36:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.356 06:36:31 -- accel/accel.sh@21 -- # val= 00:06:17.356 06:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.356 06:36:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.356 06:36:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.356 06:36:31 -- accel/accel.sh@21 -- # val= 00:06:17.356 06:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.356 06:36:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.356 06:36:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.356 06:36:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.356 06:36:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:17.356 06:36:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.356 00:06:17.356 real 0m2.763s 00:06:17.356 user 0m2.409s 00:06:17.356 sys 0m0.153s 00:06:17.356 06:36:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.356 06:36:31 -- common/autotest_common.sh@10 -- # set +x 00:06:17.356 06:36:31 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:06:17.356 06:36:31 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:17.356 06:36:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.356 06:36:31 -- common/autotest_common.sh@10 -- # set +x 00:06:17.356 ************************************ 00:06:17.356 START TEST accel_decmop_full 00:06:17.356 ************************************ 00:06:17.356 06:36:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:17.356 06:36:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.356 06:36:31 -- accel/accel.sh@17 -- # local accel_module 00:06:17.356 06:36:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:17.356 06:36:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:17.356 06:36:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.356 06:36:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.356 06:36:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.356 06:36:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.356 06:36:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.356 06:36:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.356 06:36:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.356 06:36:31 -- accel/accel.sh@42 -- # jq -r . 00:06:17.356 [2024-12-14 06:36:31.312802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.356 [2024-12-14 06:36:31.312950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56908 ] 00:06:17.615 [2024-12-14 06:36:31.455136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.615 [2024-12-14 06:36:31.509126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.993 06:36:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:18.993 00:06:18.993 SPDK Configuration: 00:06:18.993 Core mask: 0x1 00:06:18.993 00:06:18.993 Accel Perf Configuration: 00:06:18.993 Workload Type: decompress 00:06:18.993 Transfer size: 111250 bytes 00:06:18.993 Vector count 1 00:06:18.993 Module: software 00:06:18.993 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:18.993 Queue depth: 32 00:06:18.993 Allocate depth: 32 00:06:18.993 # threads/core: 1 00:06:18.993 Run time: 1 seconds 00:06:18.993 Verify: Yes 00:06:18.993 00:06:18.993 Running for 1 seconds... 
00:06:18.993 00:06:18.993 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.993 ------------------------------------------------------------------------------------ 00:06:18.994 0,0 5088/s 210 MiB/s 0 0 00:06:18.994 ==================================================================================== 00:06:18.994 Total 5088/s 539 MiB/s 0 0' 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:18.994 06:36:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.994 06:36:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:18.994 06:36:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.994 06:36:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.994 06:36:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.994 06:36:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.994 06:36:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.994 06:36:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.994 06:36:32 -- accel/accel.sh@42 -- # jq -r . 00:06:18.994 [2024-12-14 06:36:32.714144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:18.994 [2024-12-14 06:36:32.714548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56933 ] 00:06:18.994 [2024-12-14 06:36:32.847925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.994 [2024-12-14 06:36:32.898124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val=0x1 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val=decompress 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:18.994 06:36:32 -- accel/accel.sh@20 
-- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val=software 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val=32 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val=32 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val=1 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val=Yes 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.994 06:36:32 -- accel/accel.sh@21 -- # val= 00:06:18.994 06:36:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.994 06:36:32 -- accel/accel.sh@20 -- # read -r var val 00:06:20.370 06:36:34 -- accel/accel.sh@21 -- # val= 00:06:20.370 06:36:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.370 06:36:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.370 06:36:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.370 06:36:34 -- accel/accel.sh@21 -- # val= 00:06:20.370 06:36:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.370 06:36:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.370 06:36:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.370 06:36:34 -- accel/accel.sh@21 -- # val= 00:06:20.370 06:36:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.370 06:36:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.370 06:36:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.370 06:36:34 -- accel/accel.sh@21 -- # 
val= 00:06:20.370 06:36:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.370 06:36:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.370 06:36:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.370 06:36:34 -- accel/accel.sh@21 -- # val= 00:06:20.370 06:36:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.370 06:36:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.370 06:36:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.370 06:36:34 -- accel/accel.sh@21 -- # val= 00:06:20.370 06:36:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.370 06:36:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.370 06:36:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.370 06:36:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.370 06:36:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:20.370 06:36:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.370 00:06:20.370 real 0m2.797s 00:06:20.370 user 0m2.429s 00:06:20.370 sys 0m0.161s 00:06:20.370 ************************************ 00:06:20.370 END TEST accel_decmop_full 00:06:20.370 ************************************ 00:06:20.370 06:36:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.371 06:36:34 -- common/autotest_common.sh@10 -- # set +x 00:06:20.371 06:36:34 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:20.371 06:36:34 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:20.371 06:36:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.371 06:36:34 -- common/autotest_common.sh@10 -- # set +x 00:06:20.371 ************************************ 00:06:20.371 START TEST accel_decomp_mcore 00:06:20.371 ************************************ 00:06:20.371 06:36:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:20.371 06:36:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.371 06:36:34 -- accel/accel.sh@17 -- # local accel_module 00:06:20.371 06:36:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:20.371 06:36:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:20.371 06:36:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.371 06:36:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.371 06:36:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.371 06:36:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.371 06:36:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.371 06:36:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.371 06:36:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.371 06:36:34 -- accel/accel.sh@42 -- # jq -r . 00:06:20.371 [2024-12-14 06:36:34.163836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:20.371 [2024-12-14 06:36:34.164029] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56962 ] 00:06:20.371 [2024-12-14 06:36:34.309645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.629 [2024-12-14 06:36:34.366796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.629 [2024-12-14 06:36:34.367072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.629 [2024-12-14 06:36:34.366924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.629 [2024-12-14 06:36:34.367068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.568 06:36:35 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:21.568 00:06:21.568 SPDK Configuration: 00:06:21.568 Core mask: 0xf 00:06:21.568 00:06:21.568 Accel Perf Configuration: 00:06:21.568 Workload Type: decompress 00:06:21.568 Transfer size: 4096 bytes 00:06:21.568 Vector count 1 00:06:21.568 Module: software 00:06:21.568 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:21.568 Queue depth: 32 00:06:21.568 Allocate depth: 32 00:06:21.568 # threads/core: 1 00:06:21.568 Run time: 1 seconds 00:06:21.568 Verify: Yes 00:06:21.568 00:06:21.568 Running for 1 seconds... 00:06:21.568 00:06:21.568 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.568 ------------------------------------------------------------------------------------ 00:06:21.568 0,0 64576/s 118 MiB/s 0 0 00:06:21.568 3,0 61952/s 114 MiB/s 0 0 00:06:21.568 2,0 61632/s 113 MiB/s 0 0 00:06:21.568 1,0 63232/s 116 MiB/s 0 0 00:06:21.568 ==================================================================================== 00:06:21.568 Total 251392/s 982 MiB/s 0 0' 00:06:21.568 06:36:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:21.568 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.568 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.568 06:36:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:21.568 06:36:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.568 06:36:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.568 06:36:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.568 06:36:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.568 06:36:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.568 06:36:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.568 06:36:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.568 06:36:35 -- accel/accel.sh@42 -- # jq -r . 00:06:21.568 [2024-12-14 06:36:35.553883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:21.568 [2024-12-14 06:36:35.554028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56985 ] 00:06:21.827 [2024-12-14 06:36:35.686703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.827 [2024-12-14 06:36:35.741461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.827 [2024-12-14 06:36:35.741594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.827 [2024-12-14 06:36:35.741707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.827 [2024-12-14 06:36:35.741711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val=0xf 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val=decompress 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val=software 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 
00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val=32 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val=32 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val=1 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val=Yes 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.827 06:36:35 -- accel/accel.sh@21 -- # val= 00:06:21.827 06:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.827 06:36:35 -- accel/accel.sh@20 -- # read -r var val 00:06:23.204 06:36:36 -- accel/accel.sh@21 -- # val= 00:06:23.204 06:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.204 06:36:36 -- accel/accel.sh@21 -- # val= 00:06:23.204 06:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.204 06:36:36 -- accel/accel.sh@21 -- # val= 00:06:23.204 06:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.204 06:36:36 -- accel/accel.sh@21 -- # val= 00:06:23.204 06:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.204 06:36:36 -- accel/accel.sh@21 -- # val= 00:06:23.204 06:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.204 06:36:36 -- accel/accel.sh@21 -- # val= 00:06:23.204 06:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.204 06:36:36 -- accel/accel.sh@21 -- # val= 00:06:23.204 06:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.204 ************************************ 00:06:23.204 END TEST accel_decomp_mcore 00:06:23.204 ************************************ 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.204 06:36:36 -- accel/accel.sh@21 -- # val= 
00:06:23.204 06:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.204 06:36:36 -- accel/accel.sh@21 -- # val= 00:06:23.204 06:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # IFS=: 00:06:23.204 06:36:36 -- accel/accel.sh@20 -- # read -r var val 00:06:23.205 06:36:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.205 06:36:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:23.205 06:36:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.205 00:06:23.205 real 0m2.790s 00:06:23.205 user 0m8.841s 00:06:23.205 sys 0m0.177s 00:06:23.205 06:36:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.205 06:36:36 -- common/autotest_common.sh@10 -- # set +x 00:06:23.205 06:36:36 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.205 06:36:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:23.205 06:36:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.205 06:36:36 -- common/autotest_common.sh@10 -- # set +x 00:06:23.205 ************************************ 00:06:23.205 START TEST accel_decomp_full_mcore 00:06:23.205 ************************************ 00:06:23.205 06:36:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.205 06:36:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.205 06:36:36 -- accel/accel.sh@17 -- # local accel_module 00:06:23.205 06:36:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.205 06:36:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:23.205 06:36:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.205 06:36:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.205 06:36:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.205 06:36:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.205 06:36:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.205 06:36:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.205 06:36:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.205 06:36:36 -- accel/accel.sh@42 -- # jq -r . 00:06:23.205 [2024-12-14 06:36:36.997236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:23.205 [2024-12-14 06:36:36.997722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57022 ] 00:06:23.205 [2024-12-14 06:36:37.142966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.463 [2024-12-14 06:36:37.195677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.463 [2024-12-14 06:36:37.195811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.463 [2024-12-14 06:36:37.195913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.463 [2024-12-14 06:36:37.195916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.840 06:36:38 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:24.840 00:06:24.840 SPDK Configuration: 00:06:24.840 Core mask: 0xf 00:06:24.840 00:06:24.840 Accel Perf Configuration: 00:06:24.840 Workload Type: decompress 00:06:24.840 Transfer size: 111250 bytes 00:06:24.840 Vector count 1 00:06:24.840 Module: software 00:06:24.840 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:24.840 Queue depth: 32 00:06:24.840 Allocate depth: 32 00:06:24.840 # threads/core: 1 00:06:24.840 Run time: 1 seconds 00:06:24.840 Verify: Yes 00:06:24.840 00:06:24.840 Running for 1 seconds... 00:06:24.840 00:06:24.840 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.840 ------------------------------------------------------------------------------------ 00:06:24.840 0,0 4512/s 186 MiB/s 0 0 00:06:24.840 3,0 4512/s 186 MiB/s 0 0 00:06:24.840 2,0 3936/s 162 MiB/s 0 0 00:06:24.840 1,0 4384/s 181 MiB/s 0 0 00:06:24.840 ==================================================================================== 00:06:24.840 Total 17344/s 1840 MiB/s 0 0' 00:06:24.840 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.840 06:36:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:24.840 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.840 06:36:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.840 06:36:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:24.840 06:36:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.840 06:36:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.840 06:36:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.840 06:36:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.840 06:36:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.840 06:36:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.840 06:36:38 -- accel/accel.sh@42 -- # jq -r . 00:06:24.840 [2024-12-14 06:36:38.423472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:24.840 [2024-12-14 06:36:38.423562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57039 ] 00:06:24.841 [2024-12-14 06:36:38.561366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.841 [2024-12-14 06:36:38.613534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.841 [2024-12-14 06:36:38.613689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.841 [2024-12-14 06:36:38.613823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.841 [2024-12-14 06:36:38.614099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val=0xf 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val=decompress 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val=software 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 
00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val=32 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val=32 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val=1 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val=Yes 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.841 06:36:38 -- accel/accel.sh@21 -- # val= 00:06:24.841 06:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.841 06:36:38 -- accel/accel.sh@20 -- # read -r var val 00:06:26.219 06:36:39 -- accel/accel.sh@21 -- # val= 00:06:26.219 06:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # IFS=: 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # read -r var val 00:06:26.219 06:36:39 -- accel/accel.sh@21 -- # val= 00:06:26.219 06:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # IFS=: 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # read -r var val 00:06:26.219 06:36:39 -- accel/accel.sh@21 -- # val= 00:06:26.219 06:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # IFS=: 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # read -r var val 00:06:26.219 06:36:39 -- accel/accel.sh@21 -- # val= 00:06:26.219 06:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # IFS=: 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # read -r var val 00:06:26.219 06:36:39 -- accel/accel.sh@21 -- # val= 00:06:26.219 06:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # IFS=: 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # read -r var val 00:06:26.219 06:36:39 -- accel/accel.sh@21 -- # val= 00:06:26.219 06:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # IFS=: 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # read -r var val 00:06:26.219 06:36:39 -- accel/accel.sh@21 -- # val= 00:06:26.219 06:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # IFS=: 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # read -r var val 00:06:26.219 06:36:39 -- accel/accel.sh@21 -- # val= 00:06:26.219 06:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # IFS=: 00:06:26.219 06:36:39 -- 
accel/accel.sh@20 -- # read -r var val 00:06:26.219 06:36:39 -- accel/accel.sh@21 -- # val= 00:06:26.219 06:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # IFS=: 00:06:26.219 06:36:39 -- accel/accel.sh@20 -- # read -r var val 00:06:26.219 06:36:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.219 06:36:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:26.219 06:36:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.219 00:06:26.219 real 0m2.849s 00:06:26.219 user 0m9.003s 00:06:26.219 sys 0m0.174s 00:06:26.219 06:36:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.219 06:36:39 -- common/autotest_common.sh@10 -- # set +x 00:06:26.219 ************************************ 00:06:26.219 END TEST accel_decomp_full_mcore 00:06:26.219 ************************************ 00:06:26.219 06:36:39 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:26.219 06:36:39 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:26.219 06:36:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.219 06:36:39 -- common/autotest_common.sh@10 -- # set +x 00:06:26.219 ************************************ 00:06:26.219 START TEST accel_decomp_mthread 00:06:26.219 ************************************ 00:06:26.219 06:36:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:26.219 06:36:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.219 06:36:39 -- accel/accel.sh@17 -- # local accel_module 00:06:26.219 06:36:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:26.219 06:36:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:26.219 06:36:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.219 06:36:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.219 06:36:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.219 06:36:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.219 06:36:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.219 06:36:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.219 06:36:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.219 06:36:39 -- accel/accel.sh@42 -- # jq -r . 00:06:26.219 [2024-12-14 06:36:39.895351] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.219 [2024-12-14 06:36:39.895624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57081 ] 00:06:26.219 [2024-12-14 06:36:40.034546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.219 [2024-12-14 06:36:40.081472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.631 06:36:41 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:27.631 00:06:27.631 SPDK Configuration: 00:06:27.631 Core mask: 0x1 00:06:27.631 00:06:27.631 Accel Perf Configuration: 00:06:27.631 Workload Type: decompress 00:06:27.631 Transfer size: 4096 bytes 00:06:27.631 Vector count 1 00:06:27.631 Module: software 00:06:27.631 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:27.631 Queue depth: 32 00:06:27.631 Allocate depth: 32 00:06:27.631 # threads/core: 2 00:06:27.631 Run time: 1 seconds 00:06:27.631 Verify: Yes 00:06:27.631 00:06:27.631 Running for 1 seconds... 00:06:27.631 00:06:27.631 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.631 ------------------------------------------------------------------------------------ 00:06:27.631 0,1 39360/s 72 MiB/s 0 0 00:06:27.631 0,0 39296/s 72 MiB/s 0 0 00:06:27.631 ==================================================================================== 00:06:27.631 Total 78656/s 307 MiB/s 0 0' 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:27.631 06:36:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.631 06:36:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:27.631 06:36:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.631 06:36:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.631 06:36:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.631 06:36:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.631 06:36:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.631 06:36:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.631 06:36:41 -- accel/accel.sh@42 -- # jq -r . 00:06:27.631 [2024-12-14 06:36:41.281986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:27.631 [2024-12-14 06:36:41.282270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57096 ] 00:06:27.631 [2024-12-14 06:36:41.414493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.631 [2024-12-14 06:36:41.469686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val= 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val= 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val= 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val=0x1 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val= 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val= 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val=decompress 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val= 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val=software 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val=32 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- 
accel/accel.sh@21 -- # val=32 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val=2 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val=Yes 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val= 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.631 06:36:41 -- accel/accel.sh@21 -- # val= 00:06:27.631 06:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.631 06:36:41 -- accel/accel.sh@20 -- # read -r var val 00:06:29.009 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:29.009 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.009 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:29.009 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.009 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:29.009 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.009 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:29.009 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.009 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:29.009 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.009 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:29.009 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.009 06:36:42 -- accel/accel.sh@21 -- # val= 00:06:29.009 06:36:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # IFS=: 00:06:29.009 06:36:42 -- accel/accel.sh@20 -- # read -r var val 00:06:29.009 06:36:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.009 06:36:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:29.009 06:36:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.009 00:06:29.009 real 0m2.784s 00:06:29.009 user 0m2.421s 00:06:29.009 sys 0m0.159s 00:06:29.009 06:36:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.009 06:36:42 -- common/autotest_common.sh@10 -- # set +x 00:06:29.009 ************************************ 00:06:29.009 END 
TEST accel_decomp_mthread 00:06:29.009 ************************************ 00:06:29.009 06:36:42 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.009 06:36:42 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:29.009 06:36:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.009 06:36:42 -- common/autotest_common.sh@10 -- # set +x 00:06:29.009 ************************************ 00:06:29.009 START TEST accel_deomp_full_mthread 00:06:29.009 ************************************ 00:06:29.009 06:36:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.009 06:36:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.009 06:36:42 -- accel/accel.sh@17 -- # local accel_module 00:06:29.009 06:36:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.009 06:36:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:29.009 06:36:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.009 06:36:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.009 06:36:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.009 06:36:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.009 06:36:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.009 06:36:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.009 06:36:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.009 06:36:42 -- accel/accel.sh@42 -- # jq -r . 00:06:29.009 [2024-12-14 06:36:42.728773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.009 [2024-12-14 06:36:42.728867] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57131 ] 00:06:29.009 [2024-12-14 06:36:42.862521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.009 [2024-12-14 06:36:42.916946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.385 06:36:44 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:30.385 00:06:30.385 SPDK Configuration: 00:06:30.385 Core mask: 0x1 00:06:30.385 00:06:30.385 Accel Perf Configuration: 00:06:30.385 Workload Type: decompress 00:06:30.385 Transfer size: 111250 bytes 00:06:30.385 Vector count 1 00:06:30.386 Module: software 00:06:30.386 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:30.386 Queue depth: 32 00:06:30.386 Allocate depth: 32 00:06:30.386 # threads/core: 2 00:06:30.386 Run time: 1 seconds 00:06:30.386 Verify: Yes 00:06:30.386 00:06:30.386 Running for 1 seconds... 
00:06:30.386 00:06:30.386 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.386 ------------------------------------------------------------------------------------ 00:06:30.386 0,1 2432/s 100 MiB/s 0 0 00:06:30.386 0,0 2400/s 99 MiB/s 0 0 00:06:30.386 ==================================================================================== 00:06:30.386 Total 4832/s 512 MiB/s 0 0' 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.386 06:36:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:30.386 06:36:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.386 06:36:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.386 06:36:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.386 06:36:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.386 06:36:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.386 06:36:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.386 06:36:44 -- accel/accel.sh@42 -- # jq -r . 00:06:30.386 [2024-12-14 06:36:44.118087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.386 [2024-12-14 06:36:44.118320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57148 ] 00:06:30.386 [2024-12-14 06:36:44.252565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.386 [2024-12-14 06:36:44.304382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val= 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val= 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val= 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val=0x1 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val= 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val= 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val=decompress 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val= 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val=software 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val=32 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val=32 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val=2 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val=Yes 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val= 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.386 06:36:44 -- accel/accel.sh@21 -- # val= 00:06:30.386 06:36:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.386 06:36:44 -- accel/accel.sh@20 -- # read -r var val 00:06:31.767 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.767 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.767 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.767 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.767 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.767 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # 
read -r var val 00:06:31.767 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.767 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.767 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.767 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.767 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.767 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.767 06:36:45 -- accel/accel.sh@21 -- # val= 00:06:31.767 06:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # IFS=: 00:06:31.767 06:36:45 -- accel/accel.sh@20 -- # read -r var val 00:06:31.767 06:36:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.767 06:36:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:31.767 06:36:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.767 00:06:31.767 real 0m2.785s 00:06:31.767 user 0m2.443s 00:06:31.767 sys 0m0.141s 00:06:31.767 06:36:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.767 06:36:45 -- common/autotest_common.sh@10 -- # set +x 00:06:31.767 ************************************ 00:06:31.767 END TEST accel_deomp_full_mthread 00:06:31.767 ************************************ 00:06:31.767 06:36:45 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:31.767 06:36:45 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:31.767 06:36:45 -- accel/accel.sh@129 -- # build_accel_config 00:06:31.767 06:36:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.767 06:36:45 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:31.767 06:36:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.767 06:36:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.767 06:36:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.767 06:36:45 -- common/autotest_common.sh@10 -- # set +x 00:06:31.767 06:36:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.767 06:36:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.767 06:36:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.767 06:36:45 -- accel/accel.sh@42 -- # jq -r . 00:06:31.767 ************************************ 00:06:31.767 START TEST accel_dif_functional_tests 00:06:31.767 ************************************ 00:06:31.767 06:36:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:31.767 [2024-12-14 06:36:45.594245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:31.767 [2024-12-14 06:36:45.594334] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57182 ] 00:06:31.767 [2024-12-14 06:36:45.730777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.026 [2024-12-14 06:36:45.783927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.026 [2024-12-14 06:36:45.784045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.026 [2024-12-14 06:36:45.784048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.026 00:06:32.026 00:06:32.026 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.026 http://cunit.sourceforge.net/ 00:06:32.026 00:06:32.026 00:06:32.026 Suite: accel_dif 00:06:32.026 Test: verify: DIF generated, GUARD check ...passed 00:06:32.026 Test: verify: DIF generated, APPTAG check ...passed 00:06:32.026 Test: verify: DIF generated, REFTAG check ...passed 00:06:32.026 Test: verify: DIF not generated, GUARD check ...passed 00:06:32.026 Test: verify: DIF not generated, APPTAG check ...[2024-12-14 06:36:45.834195] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:32.026 [2024-12-14 06:36:45.834303] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:32.026 [2024-12-14 06:36:45.834360] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:32.026 passed 00:06:32.026 Test: verify: DIF not generated, REFTAG check ...passed 00:06:32.026 Test: verify: APPTAG correct, APPTAG check ...[2024-12-14 06:36:45.834387] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:32.026 [2024-12-14 06:36:45.834410] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:32.026 [2024-12-14 06:36:45.834468] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:32.026 passed 00:06:32.026 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-14 06:36:45.834652] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:32.026 passed 00:06:32.026 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:32.026 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:32.026 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:32.026 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-14 06:36:45.834958] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:32.026 passed 00:06:32.026 Test: generate copy: DIF generated, GUARD check ...passed 00:06:32.026 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:32.026 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:32.026 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:32.026 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:32.026 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:32.026 Test: generate copy: iovecs-len validate ...passed 00:06:32.026 Test: generate copy: buffer alignment validate ...[2024-12-14 06:36:45.835481] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:32.026 passed 00:06:32.026 00:06:32.026 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.026 suites 1 1 n/a 0 0 00:06:32.026 tests 20 20 20 0 0 00:06:32.026 asserts 204 204 204 0 n/a 00:06:32.026 00:06:32.026 Elapsed time = 0.003 seconds 00:06:32.026 00:06:32.026 real 0m0.459s 00:06:32.026 user 0m0.511s 00:06:32.027 sys 0m0.102s 00:06:32.027 06:36:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.027 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.027 ************************************ 00:06:32.027 END TEST accel_dif_functional_tests 00:06:32.027 ************************************ 00:06:32.286 00:06:32.286 real 0m59.210s 00:06:32.286 user 1m4.353s 00:06:32.286 sys 0m4.270s 00:06:32.286 06:36:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.286 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.286 ************************************ 00:06:32.286 END TEST accel 00:06:32.286 ************************************ 00:06:32.286 06:36:46 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:32.286 06:36:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.286 06:36:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.286 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.286 ************************************ 00:06:32.286 START TEST accel_rpc 00:06:32.286 ************************************ 00:06:32.286 06:36:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:32.286 * Looking for test storage... 00:06:32.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:32.286 06:36:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:32.286 06:36:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:32.286 06:36:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:32.286 06:36:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:32.286 06:36:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:32.286 06:36:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:32.286 06:36:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:32.286 06:36:46 -- scripts/common.sh@335 -- # IFS=.-: 00:06:32.286 06:36:46 -- scripts/common.sh@335 -- # read -ra ver1 00:06:32.286 06:36:46 -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.286 06:36:46 -- scripts/common.sh@336 -- # read -ra ver2 00:06:32.286 06:36:46 -- scripts/common.sh@337 -- # local 'op=<' 00:06:32.286 06:36:46 -- scripts/common.sh@339 -- # ver1_l=2 00:06:32.286 06:36:46 -- scripts/common.sh@340 -- # ver2_l=1 00:06:32.286 06:36:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:32.286 06:36:46 -- scripts/common.sh@343 -- # case "$op" in 00:06:32.286 06:36:46 -- scripts/common.sh@344 -- # : 1 00:06:32.286 06:36:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:32.286 06:36:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.286 06:36:46 -- scripts/common.sh@364 -- # decimal 1 00:06:32.286 06:36:46 -- scripts/common.sh@352 -- # local d=1 00:06:32.286 06:36:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.286 06:36:46 -- scripts/common.sh@354 -- # echo 1 00:06:32.286 06:36:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:32.286 06:36:46 -- scripts/common.sh@365 -- # decimal 2 00:06:32.286 06:36:46 -- scripts/common.sh@352 -- # local d=2 00:06:32.286 06:36:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.286 06:36:46 -- scripts/common.sh@354 -- # echo 2 00:06:32.286 06:36:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:32.286 06:36:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:32.286 06:36:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:32.286 06:36:46 -- scripts/common.sh@367 -- # return 0 00:06:32.286 06:36:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.286 06:36:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:32.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.286 --rc genhtml_branch_coverage=1 00:06:32.286 --rc genhtml_function_coverage=1 00:06:32.286 --rc genhtml_legend=1 00:06:32.286 --rc geninfo_all_blocks=1 00:06:32.286 --rc geninfo_unexecuted_blocks=1 00:06:32.286 00:06:32.286 ' 00:06:32.286 06:36:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:32.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.286 --rc genhtml_branch_coverage=1 00:06:32.286 --rc genhtml_function_coverage=1 00:06:32.286 --rc genhtml_legend=1 00:06:32.286 --rc geninfo_all_blocks=1 00:06:32.286 --rc geninfo_unexecuted_blocks=1 00:06:32.286 00:06:32.286 ' 00:06:32.286 06:36:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:32.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.286 --rc genhtml_branch_coverage=1 00:06:32.286 --rc genhtml_function_coverage=1 00:06:32.286 --rc genhtml_legend=1 00:06:32.286 --rc geninfo_all_blocks=1 00:06:32.286 --rc geninfo_unexecuted_blocks=1 00:06:32.286 00:06:32.286 ' 00:06:32.286 06:36:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:32.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.286 --rc genhtml_branch_coverage=1 00:06:32.286 --rc genhtml_function_coverage=1 00:06:32.286 --rc genhtml_legend=1 00:06:32.286 --rc geninfo_all_blocks=1 00:06:32.286 --rc geninfo_unexecuted_blocks=1 00:06:32.286 00:06:32.286 ' 00:06:32.286 06:36:46 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:32.286 06:36:46 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=57259 00:06:32.286 06:36:46 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:32.286 06:36:46 -- accel/accel_rpc.sh@15 -- # waitforlisten 57259 00:06:32.286 06:36:46 -- common/autotest_common.sh@829 -- # '[' -z 57259 ']' 00:06:32.286 06:36:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.286 06:36:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.286 06:36:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:32.286 06:36:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.286 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.545 [2024-12-14 06:36:46.330060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.545 [2024-12-14 06:36:46.330167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57259 ] 00:06:32.545 [2024-12-14 06:36:46.466948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.545 [2024-12-14 06:36:46.520940] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.545 [2024-12-14 06:36:46.521153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.804 06:36:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.804 06:36:46 -- common/autotest_common.sh@862 -- # return 0 00:06:32.804 06:36:46 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:32.804 06:36:46 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:32.804 06:36:46 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:32.804 06:36:46 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:32.804 06:36:46 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:32.804 06:36:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.804 06:36:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.804 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.804 ************************************ 00:06:32.804 START TEST accel_assign_opcode 00:06:32.804 ************************************ 00:06:32.804 06:36:46 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:06:32.804 06:36:46 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:32.804 06:36:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.804 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.804 [2024-12-14 06:36:46.581631] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:32.804 06:36:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.804 06:36:46 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:32.804 06:36:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.804 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.804 [2024-12-14 06:36:46.589627] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:32.804 06:36:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.804 06:36:46 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:32.804 06:36:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.804 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.804 06:36:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.804 06:36:46 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:32.804 06:36:46 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:32.804 06:36:46 -- accel/accel_rpc.sh@42 -- # grep software 00:06:32.804 06:36:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.804 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.804 06:36:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.804 software 00:06:32.804 00:06:32.804 
real 0m0.193s 00:06:32.804 user 0m0.054s 00:06:32.804 sys 0m0.011s 00:06:32.804 06:36:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.804 06:36:46 -- common/autotest_common.sh@10 -- # set +x 00:06:32.804 ************************************ 00:06:32.804 END TEST accel_assign_opcode 00:06:32.804 ************************************ 00:06:33.063 06:36:46 -- accel/accel_rpc.sh@55 -- # killprocess 57259 00:06:33.063 06:36:46 -- common/autotest_common.sh@936 -- # '[' -z 57259 ']' 00:06:33.063 06:36:46 -- common/autotest_common.sh@940 -- # kill -0 57259 00:06:33.063 06:36:46 -- common/autotest_common.sh@941 -- # uname 00:06:33.063 06:36:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.063 06:36:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57259 00:06:33.063 06:36:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.063 06:36:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.063 killing process with pid 57259 00:06:33.063 06:36:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57259' 00:06:33.063 06:36:46 -- common/autotest_common.sh@955 -- # kill 57259 00:06:33.063 06:36:46 -- common/autotest_common.sh@960 -- # wait 57259 00:06:33.321 00:06:33.321 real 0m1.020s 00:06:33.321 user 0m1.015s 00:06:33.321 sys 0m0.332s 00:06:33.321 06:36:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.321 06:36:47 -- common/autotest_common.sh@10 -- # set +x 00:06:33.321 ************************************ 00:06:33.321 END TEST accel_rpc 00:06:33.321 ************************************ 00:06:33.321 06:36:47 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:33.321 06:36:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:33.321 06:36:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.321 06:36:47 -- common/autotest_common.sh@10 -- # set +x 00:06:33.321 ************************************ 00:06:33.321 START TEST app_cmdline 00:06:33.321 ************************************ 00:06:33.321 06:36:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:33.321 * Looking for test storage... 
00:06:33.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:33.321 06:36:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:33.321 06:36:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:33.321 06:36:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:33.580 06:36:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:33.580 06:36:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:33.580 06:36:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:33.580 06:36:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:33.580 06:36:47 -- scripts/common.sh@335 -- # IFS=.-: 00:06:33.580 06:36:47 -- scripts/common.sh@335 -- # read -ra ver1 00:06:33.580 06:36:47 -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.580 06:36:47 -- scripts/common.sh@336 -- # read -ra ver2 00:06:33.581 06:36:47 -- scripts/common.sh@337 -- # local 'op=<' 00:06:33.581 06:36:47 -- scripts/common.sh@339 -- # ver1_l=2 00:06:33.581 06:36:47 -- scripts/common.sh@340 -- # ver2_l=1 00:06:33.581 06:36:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:33.581 06:36:47 -- scripts/common.sh@343 -- # case "$op" in 00:06:33.581 06:36:47 -- scripts/common.sh@344 -- # : 1 00:06:33.581 06:36:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:33.581 06:36:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.581 06:36:47 -- scripts/common.sh@364 -- # decimal 1 00:06:33.581 06:36:47 -- scripts/common.sh@352 -- # local d=1 00:06:33.581 06:36:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.581 06:36:47 -- scripts/common.sh@354 -- # echo 1 00:06:33.581 06:36:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:33.581 06:36:47 -- scripts/common.sh@365 -- # decimal 2 00:06:33.581 06:36:47 -- scripts/common.sh@352 -- # local d=2 00:06:33.581 06:36:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.581 06:36:47 -- scripts/common.sh@354 -- # echo 2 00:06:33.581 06:36:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:33.581 06:36:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:33.581 06:36:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:33.581 06:36:47 -- scripts/common.sh@367 -- # return 0 00:06:33.581 06:36:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.581 06:36:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:33.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.581 --rc genhtml_branch_coverage=1 00:06:33.581 --rc genhtml_function_coverage=1 00:06:33.581 --rc genhtml_legend=1 00:06:33.581 --rc geninfo_all_blocks=1 00:06:33.581 --rc geninfo_unexecuted_blocks=1 00:06:33.581 00:06:33.581 ' 00:06:33.581 06:36:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:33.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.581 --rc genhtml_branch_coverage=1 00:06:33.581 --rc genhtml_function_coverage=1 00:06:33.581 --rc genhtml_legend=1 00:06:33.581 --rc geninfo_all_blocks=1 00:06:33.581 --rc geninfo_unexecuted_blocks=1 00:06:33.581 00:06:33.581 ' 00:06:33.581 06:36:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:33.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.581 --rc genhtml_branch_coverage=1 00:06:33.581 --rc genhtml_function_coverage=1 00:06:33.581 --rc genhtml_legend=1 00:06:33.581 --rc geninfo_all_blocks=1 00:06:33.581 --rc geninfo_unexecuted_blocks=1 00:06:33.581 00:06:33.581 ' 00:06:33.581 06:36:47 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:33.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.581 --rc genhtml_branch_coverage=1 00:06:33.581 --rc genhtml_function_coverage=1 00:06:33.581 --rc genhtml_legend=1 00:06:33.581 --rc geninfo_all_blocks=1 00:06:33.581 --rc geninfo_unexecuted_blocks=1 00:06:33.581 00:06:33.581 ' 00:06:33.581 06:36:47 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:33.581 06:36:47 -- app/cmdline.sh@17 -- # spdk_tgt_pid=57346 00:06:33.581 06:36:47 -- app/cmdline.sh@18 -- # waitforlisten 57346 00:06:33.581 06:36:47 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:33.581 06:36:47 -- common/autotest_common.sh@829 -- # '[' -z 57346 ']' 00:06:33.581 06:36:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.581 06:36:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.581 06:36:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.581 06:36:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.581 06:36:47 -- common/autotest_common.sh@10 -- # set +x 00:06:33.581 [2024-12-14 06:36:47.393974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:33.581 [2024-12-14 06:36:47.394532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57346 ] 00:06:33.581 [2024-12-14 06:36:47.531474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.839 [2024-12-14 06:36:47.582689] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.840 [2024-12-14 06:36:47.582838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.407 06:36:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.407 06:36:48 -- common/autotest_common.sh@862 -- # return 0 00:06:34.407 06:36:48 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:34.665 { 00:06:34.665 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:06:34.665 "fields": { 00:06:34.665 "major": 24, 00:06:34.665 "minor": 1, 00:06:34.665 "patch": 1, 00:06:34.665 "suffix": "-pre", 00:06:34.665 "commit": "c13c99a5e" 00:06:34.665 } 00:06:34.665 } 00:06:34.665 06:36:48 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:34.665 06:36:48 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:34.665 06:36:48 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:34.665 06:36:48 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:34.665 06:36:48 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:34.665 06:36:48 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:34.665 06:36:48 -- app/cmdline.sh@26 -- # sort 00:06:34.665 06:36:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.665 06:36:48 -- common/autotest_common.sh@10 -- # set +x 00:06:34.665 06:36:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.665 06:36:48 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:34.665 06:36:48 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:34.665 06:36:48 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:34.665 06:36:48 -- common/autotest_common.sh@650 -- # local es=0 00:06:34.665 06:36:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:34.665 06:36:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.665 06:36:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.665 06:36:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.665 06:36:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.665 06:36:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.665 06:36:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:34.665 06:36:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.665 06:36:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:34.665 06:36:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:34.924 request: 00:06:34.924 { 00:06:34.924 "method": "env_dpdk_get_mem_stats", 00:06:34.924 "req_id": 1 00:06:34.924 } 00:06:34.924 Got JSON-RPC error response 00:06:34.924 response: 00:06:34.924 { 00:06:34.924 "code": -32601, 00:06:34.924 "message": "Method not found" 00:06:34.924 } 00:06:34.924 06:36:48 -- common/autotest_common.sh@653 -- # es=1 00:06:34.924 06:36:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:34.924 06:36:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:34.924 06:36:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:34.924 06:36:48 -- app/cmdline.sh@1 -- # killprocess 57346 00:06:34.924 06:36:48 -- common/autotest_common.sh@936 -- # '[' -z 57346 ']' 00:06:34.924 06:36:48 -- common/autotest_common.sh@940 -- # kill -0 57346 00:06:34.924 06:36:48 -- common/autotest_common.sh@941 -- # uname 00:06:34.924 06:36:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:34.924 06:36:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57346 00:06:35.183 06:36:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:35.183 06:36:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:35.183 killing process with pid 57346 00:06:35.183 06:36:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57346' 00:06:35.183 06:36:48 -- common/autotest_common.sh@955 -- # kill 57346 00:06:35.183 06:36:48 -- common/autotest_common.sh@960 -- # wait 57346 00:06:35.442 00:06:35.442 real 0m2.027s 00:06:35.442 user 0m2.633s 00:06:35.442 sys 0m0.351s 00:06:35.442 06:36:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.442 06:36:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.442 ************************************ 00:06:35.442 END TEST app_cmdline 00:06:35.442 ************************************ 00:06:35.442 06:36:49 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:35.442 06:36:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.442 06:36:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.442 06:36:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.442 
************************************ 00:06:35.442 START TEST version 00:06:35.442 ************************************ 00:06:35.442 06:36:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:35.442 * Looking for test storage... 00:06:35.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:35.442 06:36:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:35.442 06:36:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:35.442 06:36:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:35.442 06:36:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:35.442 06:36:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:35.442 06:36:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:35.442 06:36:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:35.442 06:36:49 -- scripts/common.sh@335 -- # IFS=.-: 00:06:35.442 06:36:49 -- scripts/common.sh@335 -- # read -ra ver1 00:06:35.442 06:36:49 -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.442 06:36:49 -- scripts/common.sh@336 -- # read -ra ver2 00:06:35.442 06:36:49 -- scripts/common.sh@337 -- # local 'op=<' 00:06:35.442 06:36:49 -- scripts/common.sh@339 -- # ver1_l=2 00:06:35.442 06:36:49 -- scripts/common.sh@340 -- # ver2_l=1 00:06:35.442 06:36:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:35.442 06:36:49 -- scripts/common.sh@343 -- # case "$op" in 00:06:35.442 06:36:49 -- scripts/common.sh@344 -- # : 1 00:06:35.443 06:36:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:35.443 06:36:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.443 06:36:49 -- scripts/common.sh@364 -- # decimal 1 00:06:35.443 06:36:49 -- scripts/common.sh@352 -- # local d=1 00:06:35.443 06:36:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.443 06:36:49 -- scripts/common.sh@354 -- # echo 1 00:06:35.443 06:36:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:35.443 06:36:49 -- scripts/common.sh@365 -- # decimal 2 00:06:35.443 06:36:49 -- scripts/common.sh@352 -- # local d=2 00:06:35.443 06:36:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.443 06:36:49 -- scripts/common.sh@354 -- # echo 2 00:06:35.443 06:36:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:35.443 06:36:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:35.443 06:36:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:35.443 06:36:49 -- scripts/common.sh@367 -- # return 0 00:06:35.443 06:36:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.443 06:36:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:35.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.443 --rc genhtml_branch_coverage=1 00:06:35.443 --rc genhtml_function_coverage=1 00:06:35.443 --rc genhtml_legend=1 00:06:35.443 --rc geninfo_all_blocks=1 00:06:35.443 --rc geninfo_unexecuted_blocks=1 00:06:35.443 00:06:35.443 ' 00:06:35.443 06:36:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:35.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.443 --rc genhtml_branch_coverage=1 00:06:35.443 --rc genhtml_function_coverage=1 00:06:35.443 --rc genhtml_legend=1 00:06:35.443 --rc geninfo_all_blocks=1 00:06:35.443 --rc geninfo_unexecuted_blocks=1 00:06:35.443 00:06:35.443 ' 00:06:35.443 06:36:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:35.443 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:35.443 --rc genhtml_branch_coverage=1 00:06:35.443 --rc genhtml_function_coverage=1 00:06:35.443 --rc genhtml_legend=1 00:06:35.443 --rc geninfo_all_blocks=1 00:06:35.443 --rc geninfo_unexecuted_blocks=1 00:06:35.443 00:06:35.443 ' 00:06:35.443 06:36:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:35.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.443 --rc genhtml_branch_coverage=1 00:06:35.443 --rc genhtml_function_coverage=1 00:06:35.443 --rc genhtml_legend=1 00:06:35.443 --rc geninfo_all_blocks=1 00:06:35.443 --rc geninfo_unexecuted_blocks=1 00:06:35.443 00:06:35.443 ' 00:06:35.443 06:36:49 -- app/version.sh@17 -- # get_header_version major 00:06:35.443 06:36:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.443 06:36:49 -- app/version.sh@14 -- # cut -f2 00:06:35.443 06:36:49 -- app/version.sh@14 -- # tr -d '"' 00:06:35.443 06:36:49 -- app/version.sh@17 -- # major=24 00:06:35.443 06:36:49 -- app/version.sh@18 -- # get_header_version minor 00:06:35.443 06:36:49 -- app/version.sh@14 -- # cut -f2 00:06:35.443 06:36:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.443 06:36:49 -- app/version.sh@14 -- # tr -d '"' 00:06:35.443 06:36:49 -- app/version.sh@18 -- # minor=1 00:06:35.443 06:36:49 -- app/version.sh@19 -- # get_header_version patch 00:06:35.443 06:36:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.443 06:36:49 -- app/version.sh@14 -- # cut -f2 00:06:35.443 06:36:49 -- app/version.sh@14 -- # tr -d '"' 00:06:35.443 06:36:49 -- app/version.sh@19 -- # patch=1 00:06:35.702 06:36:49 -- app/version.sh@20 -- # get_header_version suffix 00:06:35.702 06:36:49 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.702 06:36:49 -- app/version.sh@14 -- # cut -f2 00:06:35.702 06:36:49 -- app/version.sh@14 -- # tr -d '"' 00:06:35.702 06:36:49 -- app/version.sh@20 -- # suffix=-pre 00:06:35.702 06:36:49 -- app/version.sh@22 -- # version=24.1 00:06:35.702 06:36:49 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:35.702 06:36:49 -- app/version.sh@25 -- # version=24.1.1 00:06:35.702 06:36:49 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:35.702 06:36:49 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:35.702 06:36:49 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:35.702 06:36:49 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:35.702 06:36:49 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:35.702 00:06:35.702 real 0m0.235s 00:06:35.702 user 0m0.163s 00:06:35.702 sys 0m0.102s 00:06:35.702 06:36:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.702 06:36:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.702 ************************************ 00:06:35.702 END TEST version 00:06:35.702 ************************************ 00:06:35.702 06:36:49 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:06:35.702 06:36:49 -- spdk/autotest.sh@191 -- # uname -s 00:06:35.702 06:36:49 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
00:06:35.702 06:36:49 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:06:35.702 06:36:49 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:06:35.702 06:36:49 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:06:35.702 06:36:49 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:35.702 06:36:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.702 06:36:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.702 06:36:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.702 ************************************ 00:06:35.702 START TEST spdk_dd 00:06:35.702 ************************************ 00:06:35.702 06:36:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:35.702 * Looking for test storage... 00:06:35.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:35.702 06:36:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:35.702 06:36:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:35.702 06:36:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:35.702 06:36:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:35.702 06:36:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:35.702 06:36:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:35.702 06:36:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:35.702 06:36:49 -- scripts/common.sh@335 -- # IFS=.-: 00:06:35.702 06:36:49 -- scripts/common.sh@335 -- # read -ra ver1 00:06:35.702 06:36:49 -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.702 06:36:49 -- scripts/common.sh@336 -- # read -ra ver2 00:06:35.702 06:36:49 -- scripts/common.sh@337 -- # local 'op=<' 00:06:35.702 06:36:49 -- scripts/common.sh@339 -- # ver1_l=2 00:06:35.702 06:36:49 -- scripts/common.sh@340 -- # ver2_l=1 00:06:35.702 06:36:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:35.702 06:36:49 -- scripts/common.sh@343 -- # case "$op" in 00:06:35.702 06:36:49 -- scripts/common.sh@344 -- # : 1 00:06:35.702 06:36:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:35.702 06:36:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.961 06:36:49 -- scripts/common.sh@364 -- # decimal 1 00:06:35.961 06:36:49 -- scripts/common.sh@352 -- # local d=1 00:06:35.961 06:36:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.961 06:36:49 -- scripts/common.sh@354 -- # echo 1 00:06:35.961 06:36:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:35.961 06:36:49 -- scripts/common.sh@365 -- # decimal 2 00:06:35.961 06:36:49 -- scripts/common.sh@352 -- # local d=2 00:06:35.961 06:36:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.961 06:36:49 -- scripts/common.sh@354 -- # echo 2 00:06:35.961 06:36:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:35.961 06:36:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:35.961 06:36:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:35.961 06:36:49 -- scripts/common.sh@367 -- # return 0 00:06:35.961 06:36:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.961 06:36:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:35.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.961 --rc genhtml_branch_coverage=1 00:06:35.961 --rc genhtml_function_coverage=1 00:06:35.961 --rc genhtml_legend=1 00:06:35.961 --rc geninfo_all_blocks=1 00:06:35.961 --rc geninfo_unexecuted_blocks=1 00:06:35.961 00:06:35.961 ' 00:06:35.961 06:36:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:35.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.961 --rc genhtml_branch_coverage=1 00:06:35.961 --rc genhtml_function_coverage=1 00:06:35.961 --rc genhtml_legend=1 00:06:35.961 --rc geninfo_all_blocks=1 00:06:35.961 --rc geninfo_unexecuted_blocks=1 00:06:35.961 00:06:35.961 ' 00:06:35.961 06:36:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:35.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.961 --rc genhtml_branch_coverage=1 00:06:35.961 --rc genhtml_function_coverage=1 00:06:35.961 --rc genhtml_legend=1 00:06:35.961 --rc geninfo_all_blocks=1 00:06:35.961 --rc geninfo_unexecuted_blocks=1 00:06:35.961 00:06:35.961 ' 00:06:35.961 06:36:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:35.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.961 --rc genhtml_branch_coverage=1 00:06:35.961 --rc genhtml_function_coverage=1 00:06:35.961 --rc genhtml_legend=1 00:06:35.961 --rc geninfo_all_blocks=1 00:06:35.961 --rc geninfo_unexecuted_blocks=1 00:06:35.961 00:06:35.961 ' 00:06:35.961 06:36:49 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:35.961 06:36:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.961 06:36:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.961 06:36:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.961 06:36:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.961 06:36:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.961 06:36:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.961 06:36:49 -- paths/export.sh@5 -- # export PATH 00:06:35.961 06:36:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.961 06:36:49 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:36.220 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:36.220 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:36.220 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:36.220 06:36:50 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:36.220 06:36:50 -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:36.220 06:36:50 -- scripts/common.sh@311 -- # local bdf bdfs 00:06:36.220 06:36:50 -- scripts/common.sh@312 -- # local nvmes 00:06:36.220 06:36:50 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:06:36.220 06:36:50 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:36.220 06:36:50 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:06:36.220 06:36:50 -- scripts/common.sh@297 -- # local bdf= 00:06:36.220 06:36:50 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:06:36.220 06:36:50 -- scripts/common.sh@232 -- # local class 00:06:36.220 06:36:50 -- scripts/common.sh@233 -- # local subclass 00:06:36.220 06:36:50 -- scripts/common.sh@234 -- # local progif 00:06:36.220 06:36:50 -- scripts/common.sh@235 -- # printf %02x 1 00:06:36.220 06:36:50 -- scripts/common.sh@235 -- # class=01 00:06:36.220 06:36:50 -- scripts/common.sh@236 -- # printf %02x 8 00:06:36.220 06:36:50 -- scripts/common.sh@236 -- # subclass=08 00:06:36.220 06:36:50 -- scripts/common.sh@237 -- # printf %02x 2 00:06:36.220 06:36:50 -- scripts/common.sh@237 -- # progif=02 00:06:36.220 06:36:50 -- scripts/common.sh@239 -- # hash lspci 00:06:36.220 06:36:50 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:06:36.220 06:36:50 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:06:36.220 06:36:50 -- scripts/common.sh@242 -- # grep -i -- -p02 00:06:36.220 06:36:50 -- scripts/common.sh@244 -- # tr -d '"' 00:06:36.220 06:36:50 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:36.220 06:36:50 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:36.220 06:36:50 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:06:36.220 06:36:50 -- scripts/common.sh@15 -- # local i 00:06:36.220 06:36:50 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:06:36.220 06:36:50 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:36.220 06:36:50 -- scripts/common.sh@24 -- # return 0 00:06:36.220 06:36:50 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:06:36.220 06:36:50 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:36.220 06:36:50 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:06:36.220 06:36:50 -- scripts/common.sh@15 -- # local i 00:06:36.220 06:36:50 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:06:36.220 06:36:50 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:36.220 06:36:50 -- scripts/common.sh@24 -- # return 0 00:06:36.220 06:36:50 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:06:36.220 06:36:50 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:06:36.220 06:36:50 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:06:36.220 06:36:50 -- scripts/common.sh@322 -- # uname -s 00:06:36.220 06:36:50 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:06:36.220 06:36:50 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:06:36.220 06:36:50 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:06:36.220 06:36:50 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:06:36.220 06:36:50 -- scripts/common.sh@322 -- # uname -s 00:06:36.220 06:36:50 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:06:36.220 06:36:50 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:06:36.220 06:36:50 -- scripts/common.sh@327 -- # (( 2 )) 00:06:36.220 06:36:50 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:06:36.220 06:36:50 -- dd/dd.sh@13 -- # check_liburing 00:06:36.220 06:36:50 -- dd/common.sh@139 -- # local lib so 00:06:36.220 06:36:50 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:36.220 06:36:50 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:06:36.220 
06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.2.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_scsi.so.8.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.2.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_power.so.24 == 
liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:06:36.220 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.220 06:36:50 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:06:36.221 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.221 06:36:50 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:06:36.221 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.221 06:36:50 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:06:36.221 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.221 06:36:50 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:06:36.221 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.221 06:36:50 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:06:36.221 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.221 06:36:50 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:06:36.221 06:36:50 -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:36.221 06:36:50 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:36.221 06:36:50 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:36.221 * spdk_dd linked to liburing 00:06:36.221 06:36:50 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:36.221 06:36:50 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:36.221 06:36:50 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:36.221 06:36:50 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:36.221 06:36:50 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:36.221 06:36:50 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:36.221 06:36:50 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:36.221 06:36:50 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:36.221 06:36:50 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:36.221 06:36:50 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:36.221 06:36:50 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:36.221 06:36:50 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:36.221 06:36:50 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:36.221 06:36:50 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:36.221 
06:36:50 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:36.221 06:36:50 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:36.221 06:36:50 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:36.221 06:36:50 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:36.221 06:36:50 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:36.221 06:36:50 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:36.221 06:36:50 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:36.221 06:36:50 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:36.221 06:36:50 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:36.221 06:36:50 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:36.221 06:36:50 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:36.221 06:36:50 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:36.221 06:36:50 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:36.221 06:36:50 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:36.221 06:36:50 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:36.221 06:36:50 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:36.221 06:36:50 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:36.221 06:36:50 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:36.221 06:36:50 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:36.221 06:36:50 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:36.221 06:36:50 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:36.221 06:36:50 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:36.221 06:36:50 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:36.221 06:36:50 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:36.221 06:36:50 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:36.221 06:36:50 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:36.221 06:36:50 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:36.221 06:36:50 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:36.221 06:36:50 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:36.221 06:36:50 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:36.221 06:36:50 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:36.221 06:36:50 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:36.221 06:36:50 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:36.221 06:36:50 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:36.221 06:36:50 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:36.221 06:36:50 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:36.221 06:36:50 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:36.221 06:36:50 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:36.221 06:36:50 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:36.221 06:36:50 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:36.221 06:36:50 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:06:36.221 06:36:50 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:36.221 06:36:50 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:36.221 06:36:50 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:36.221 06:36:50 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:36.221 06:36:50 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:36.221 06:36:50 -- 
common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:36.221 06:36:50 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:36.221 06:36:50 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:36.221 06:36:50 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:36.221 06:36:50 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:36.221 06:36:50 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:36.221 06:36:50 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:36.221 06:36:50 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:36.221 06:36:50 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:36.221 06:36:50 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:36.221 06:36:50 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:36.221 06:36:50 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:36.221 06:36:50 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:36.221 06:36:50 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:36.221 06:36:50 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:36.221 06:36:50 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:36.221 06:36:50 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:36.221 06:36:50 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:36.221 06:36:50 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:36.221 06:36:50 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:36.221 06:36:50 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:06:36.221 06:36:50 -- dd/common.sh@149 -- # [[ y != y ]] 00:06:36.221 06:36:50 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:36.221 06:36:50 -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:36.221 06:36:50 -- dd/common.sh@156 -- # liburing_in_use=1 00:06:36.221 06:36:50 -- dd/common.sh@157 -- # return 0 00:06:36.221 06:36:50 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:36.221 06:36:50 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:06:36.221 06:36:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:36.221 06:36:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.221 06:36:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.221 ************************************ 00:06:36.221 START TEST spdk_dd_basic_rw 00:06:36.221 ************************************ 00:06:36.221 06:36:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:06:36.480 * Looking for test storage... 
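[Editor's note] The dd/common.sh@142-157 trace above is the harness deciding whether spdk_dd was built with io_uring support: every shared object in spdk_dd's dependency list is compared against liburing.so.*, liburing.so.2 matches, build_config.sh confirms CONFIG_URING=y, and liburing_in_use is exported as 1. A minimal bash sketch of that decision, assuming the dependency list comes from ldd (the real helper lives in test/dd/common.sh and is more involved):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
liburing_in_use=0
while read -r lib _ so _; do
    # ldd prints "<soname> => <path> (<addr>)"; compare the soname field.
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
done < <(ldd "$SPDK_DD")
(( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'
# Cross-check against the recorded build configuration, as the trace does.
source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
[[ $CONFIG_URING == y && -e /usr/lib64/liburing.so.2 ]] || liburing_in_use=0
export liburing_in_use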
00:06:36.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:36.480 06:36:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:36.480 06:36:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:36.480 06:36:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:36.480 06:36:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:36.480 06:36:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:36.480 06:36:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:36.480 06:36:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:36.480 06:36:50 -- scripts/common.sh@335 -- # IFS=.-: 00:06:36.480 06:36:50 -- scripts/common.sh@335 -- # read -ra ver1 00:06:36.480 06:36:50 -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.480 06:36:50 -- scripts/common.sh@336 -- # read -ra ver2 00:06:36.480 06:36:50 -- scripts/common.sh@337 -- # local 'op=<' 00:06:36.480 06:36:50 -- scripts/common.sh@339 -- # ver1_l=2 00:06:36.480 06:36:50 -- scripts/common.sh@340 -- # ver2_l=1 00:06:36.480 06:36:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:36.480 06:36:50 -- scripts/common.sh@343 -- # case "$op" in 00:06:36.480 06:36:50 -- scripts/common.sh@344 -- # : 1 00:06:36.480 06:36:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:36.480 06:36:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.480 06:36:50 -- scripts/common.sh@364 -- # decimal 1 00:06:36.480 06:36:50 -- scripts/common.sh@352 -- # local d=1 00:06:36.480 06:36:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.480 06:36:50 -- scripts/common.sh@354 -- # echo 1 00:06:36.480 06:36:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:36.480 06:36:50 -- scripts/common.sh@365 -- # decimal 2 00:06:36.480 06:36:50 -- scripts/common.sh@352 -- # local d=2 00:06:36.480 06:36:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.480 06:36:50 -- scripts/common.sh@354 -- # echo 2 00:06:36.480 06:36:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:36.480 06:36:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:36.480 06:36:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:36.480 06:36:50 -- scripts/common.sh@367 -- # return 0 00:06:36.481 06:36:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.481 06:36:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.481 --rc genhtml_branch_coverage=1 00:06:36.481 --rc genhtml_function_coverage=1 00:06:36.481 --rc genhtml_legend=1 00:06:36.481 --rc geninfo_all_blocks=1 00:06:36.481 --rc geninfo_unexecuted_blocks=1 00:06:36.481 00:06:36.481 ' 00:06:36.481 06:36:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.481 --rc genhtml_branch_coverage=1 00:06:36.481 --rc genhtml_function_coverage=1 00:06:36.481 --rc genhtml_legend=1 00:06:36.481 --rc geninfo_all_blocks=1 00:06:36.481 --rc geninfo_unexecuted_blocks=1 00:06:36.481 00:06:36.481 ' 00:06:36.481 06:36:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.481 --rc genhtml_branch_coverage=1 00:06:36.481 --rc genhtml_function_coverage=1 00:06:36.481 --rc genhtml_legend=1 00:06:36.481 --rc geninfo_all_blocks=1 00:06:36.481 --rc geninfo_unexecuted_blocks=1 00:06:36.481 00:06:36.481 ' 00:06:36.481 06:36:50 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:36.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.481 --rc genhtml_branch_coverage=1 00:06:36.481 --rc genhtml_function_coverage=1 00:06:36.481 --rc genhtml_legend=1 00:06:36.481 --rc geninfo_all_blocks=1 00:06:36.481 --rc geninfo_unexecuted_blocks=1 00:06:36.481 00:06:36.481 ' 00:06:36.481 06:36:50 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:36.481 06:36:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.481 06:36:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.481 06:36:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.481 06:36:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.481 06:36:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.481 06:36:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.481 06:36:50 -- paths/export.sh@5 -- # export PATH 00:06:36.481 06:36:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.481 06:36:50 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:36.481 06:36:50 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:36.481 06:36:50 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:36.481 06:36:50 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:06:36.481 06:36:50 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:36.481 06:36:50 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' 
['trtype']='pcie') 00:06:36.481 06:36:50 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:36.481 06:36:50 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.481 06:36:50 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.481 06:36:50 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:06:36.481 06:36:50 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:06:36.481 06:36:50 -- dd/common.sh@126 -- # mapfile -t id 00:06:36.481 06:36:50 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:06:36.743 06:36:50 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command 
Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 
Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 98 Data Units Written: 9 Host Read Commands: 2240 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:36.743 06:36:50 -- dd/common.sh@130 -- # lbaf=04 00:06:36.743 06:36:50 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported 
Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive 
Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 98 Data Units Written: 9 Host Read Commands: 2240 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:36.743 06:36:50 -- dd/common.sh@132 -- # lbaf=4096 00:06:36.743 06:36:50 -- dd/common.sh@134 -- # echo 4096 00:06:36.743 06:36:50 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:36.743 06:36:50 -- dd/basic_rw.sh@96 -- # : 00:06:36.743 06:36:50 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:36.743 06:36:50 -- dd/basic_rw.sh@96 -- # gen_conf 00:06:36.743 06:36:50 -- dd/common.sh@31 -- # xtrace_disable 
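[Editor's note] Immediately above, get_native_nvme_bs (dd/common.sh@124-134) derives the drive's native block size from the spdk_nvme_identify dump: the first regex pulls the currently selected LBA format (#04), the second pulls that format's data size (4096), which basic_rw.sh stores as native_bs. Roughly, keeping the paths from the trace:

pci=0000:00:06.0
mapfile -t id < <(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
re_current='Current LBA Format: *LBA Format #([0-9]+)'
[[ ${id[*]} =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}     # "04" in this run
re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ ${id[*]} =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}   # 4096
echo "$native_bs"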
00:06:36.743 06:36:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.743 06:36:50 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:36.743 06:36:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.743 06:36:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.743 ************************************ 00:06:36.743 START TEST dd_bs_lt_native_bs 00:06:36.743 ************************************ 00:06:36.743 06:36:50 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:36.743 06:36:50 -- common/autotest_common.sh@650 -- # local es=0 00:06:36.743 06:36:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:36.743 06:36:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.743 06:36:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.743 06:36:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.743 06:36:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.743 06:36:50 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.743 06:36:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.743 06:36:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:36.743 06:36:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:36.744 06:36:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:36.744 { 00:06:36.744 "subsystems": [ 00:06:36.744 { 00:06:36.744 "subsystem": "bdev", 00:06:36.744 "config": [ 00:06:36.744 { 00:06:36.744 "params": { 00:06:36.744 "trtype": "pcie", 00:06:36.744 "traddr": "0000:00:06.0", 00:06:36.744 "name": "Nvme0" 00:06:36.744 }, 00:06:36.744 "method": "bdev_nvme_attach_controller" 00:06:36.744 }, 00:06:36.744 { 00:06:36.744 "method": "bdev_wait_for_examine" 00:06:36.744 } 00:06:36.744 ] 00:06:36.744 } 00:06:36.744 ] 00:06:36.744 } 00:06:36.744 [2024-12-14 06:36:50.627132] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
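[Editor's note] The { "subsystems": ... } block in the trace above is the configuration that gen_conf hands to spdk_dd on the extra file descriptor passed as --json /dev/fd/62 (or /dev/fd/61 in this negative test): attach the PCIe controller at 0000:00:06.0 as "Nvme0", which exposes the namespace bdev Nvme0n1, and wait for examine before the copy starts. A standalone rendering of the same configuration (the temporary file is mine; the harness uses process substitution):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cat > /tmp/dd_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Any spdk_dd invocation in this log can then be reproduced by hand, for example:
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /tmp/dd_bdev.json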
00:06:36.744 [2024-12-14 06:36:50.627219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57690 ] 00:06:37.003 [2024-12-14 06:36:50.766985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.003 [2024-12-14 06:36:50.836003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.003 [2024-12-14 06:36:50.958913] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:37.003 [2024-12-14 06:36:50.958992] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.261 [2024-12-14 06:36:51.033388] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:37.261 06:36:51 -- common/autotest_common.sh@653 -- # es=234 00:06:37.261 06:36:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:37.261 06:36:51 -- common/autotest_common.sh@662 -- # es=106 00:06:37.261 06:36:51 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:37.261 06:36:51 -- common/autotest_common.sh@670 -- # es=1 00:06:37.261 06:36:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:37.261 00:06:37.261 real 0m0.557s 00:06:37.261 user 0m0.392s 00:06:37.261 sys 0m0.117s 00:06:37.261 06:36:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.261 06:36:51 -- common/autotest_common.sh@10 -- # set +x 00:06:37.261 ************************************ 00:06:37.261 END TEST dd_bs_lt_native_bs 00:06:37.261 ************************************ 00:06:37.261 06:36:51 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:37.261 06:36:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:37.261 06:36:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.261 06:36:51 -- common/autotest_common.sh@10 -- # set +x 00:06:37.261 ************************************ 00:06:37.261 START TEST dd_rw 00:06:37.261 ************************************ 00:06:37.261 06:36:51 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:06:37.261 06:36:51 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:37.261 06:36:51 -- dd/basic_rw.sh@12 -- # local count size 00:06:37.261 06:36:51 -- dd/basic_rw.sh@13 -- # local qds bss 00:06:37.261 06:36:51 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:37.261 06:36:51 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:37.261 06:36:51 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:37.261 06:36:51 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:37.261 06:36:51 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:37.261 06:36:51 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:37.261 06:36:51 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:37.261 06:36:51 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:37.261 06:36:51 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:37.261 06:36:51 -- dd/basic_rw.sh@23 -- # count=15 00:06:37.261 06:36:51 -- dd/basic_rw.sh@24 -- # count=15 00:06:37.261 06:36:51 -- dd/basic_rw.sh@25 -- # size=61440 00:06:37.261 06:36:51 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:37.261 06:36:51 -- dd/common.sh@98 -- # xtrace_disable 00:06:37.261 06:36:51 -- common/autotest_common.sh@10 -- # set +x 00:06:37.829 06:36:51 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
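[Editor's note] dd_bs_lt_native_bs, which finishes in the trace above, is a negative test: spdk_dd is asked to copy with --bs=2048, smaller than the 4096-byte native block size found earlier, and the NOT wrapper from autotest_common.sh passes the test only if that command fails (the trace shows the expected "--bs value cannot be less than..." error and exit status 234 being normalized down to 1 before END TEST). A simplified stand-in for the wrapper, not the verbatim helper with its exit-code bookkeeping:

NOT() {
    # Succeed only when the wrapped command fails.
    if "$@"; then
        return 1
    fi
    return 0
}
# As traced: NOT "$SPDK_DD" --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61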
00:06:37.829 06:36:51 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:37.829 06:36:51 -- dd/common.sh@31 -- # xtrace_disable 00:06:37.829 06:36:51 -- common/autotest_common.sh@10 -- # set +x 00:06:37.829 [2024-12-14 06:36:51.803910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:37.829 [2024-12-14 06:36:51.804017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57723 ] 00:06:37.829 { 00:06:37.829 "subsystems": [ 00:06:37.829 { 00:06:37.829 "subsystem": "bdev", 00:06:37.829 "config": [ 00:06:37.829 { 00:06:37.829 "params": { 00:06:37.829 "trtype": "pcie", 00:06:37.829 "traddr": "0000:00:06.0", 00:06:37.829 "name": "Nvme0" 00:06:37.829 }, 00:06:37.829 "method": "bdev_nvme_attach_controller" 00:06:37.829 }, 00:06:37.829 { 00:06:37.829 "method": "bdev_wait_for_examine" 00:06:37.829 } 00:06:37.829 ] 00:06:37.829 } 00:06:37.829 ] 00:06:37.829 } 00:06:38.088 [2024-12-14 06:36:51.942094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.088 [2024-12-14 06:36:51.988601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.347  [2024-12-14T06:36:52.339Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:38.347 00:06:38.347 06:36:52 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:38.347 06:36:52 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:38.347 06:36:52 -- dd/common.sh@31 -- # xtrace_disable 00:06:38.347 06:36:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.347 [2024-12-14 06:36:52.328955] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:38.347 [2024-12-14 06:36:52.329052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57736 ] 00:06:38.606 { 00:06:38.606 "subsystems": [ 00:06:38.606 { 00:06:38.606 "subsystem": "bdev", 00:06:38.606 "config": [ 00:06:38.606 { 00:06:38.606 "params": { 00:06:38.606 "trtype": "pcie", 00:06:38.606 "traddr": "0000:00:06.0", 00:06:38.606 "name": "Nvme0" 00:06:38.606 }, 00:06:38.606 "method": "bdev_nvme_attach_controller" 00:06:38.606 }, 00:06:38.606 { 00:06:38.606 "method": "bdev_wait_for_examine" 00:06:38.606 } 00:06:38.606 ] 00:06:38.606 } 00:06:38.606 ] 00:06:38.606 } 00:06:38.606 [2024-12-14 06:36:52.456685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.606 [2024-12-14 06:36:52.503532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.866  [2024-12-14T06:36:52.858Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:38.866 00:06:38.866 06:36:52 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.866 06:36:52 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:38.866 06:36:52 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:38.866 06:36:52 -- dd/common.sh@11 -- # local nvme_ref= 00:06:38.866 06:36:52 -- dd/common.sh@12 -- # local size=61440 00:06:38.866 06:36:52 -- dd/common.sh@14 -- # local bs=1048576 00:06:38.866 06:36:52 -- dd/common.sh@15 -- # local count=1 00:06:38.866 06:36:52 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:38.866 06:36:52 -- dd/common.sh@18 -- # gen_conf 00:06:38.866 06:36:52 -- dd/common.sh@31 -- # xtrace_disable 00:06:38.866 06:36:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.866 { 00:06:38.866 "subsystems": [ 00:06:38.866 { 00:06:38.866 "subsystem": "bdev", 00:06:38.866 "config": [ 00:06:38.866 { 00:06:38.866 "params": { 00:06:38.866 "trtype": "pcie", 00:06:38.866 "traddr": "0000:00:06.0", 00:06:38.866 "name": "Nvme0" 00:06:38.866 }, 00:06:38.866 "method": "bdev_nvme_attach_controller" 00:06:38.866 }, 00:06:38.866 { 00:06:38.866 "method": "bdev_wait_for_examine" 00:06:38.866 } 00:06:38.866 ] 00:06:38.866 } 00:06:38.866 ] 00:06:38.866 } 00:06:38.866 [2024-12-14 06:36:52.848660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
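[Editor's note] One full dd_rw iteration has now been traced (bs=4096, qd=1, count=15): gen_bytes fills dd.dump0 with 61440 bytes, spdk_dd writes the file to Nvme0n1, reads the same range back into dd.dump1, diff -q requires the dumps to match, and clear_nvme zeroes the bdev before the next pass. Condensed into a sketch (file names and flags as traced; the /dev/urandom line is a stand-in for gen_bytes, and gen_conf is the harness helper whose JSON is shown earlier):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
bs=4096 qd=1 count=15                                  # 61440 bytes total
head -c $(( bs * count )) /dev/urandom > "$dump0"      # gen_bytes 61440
"$SPDK_DD" --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
"$SPDK_DD" --ib=Nvme0n1 --of="$dump1" --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
diff -q "$dump0" "$dump1"                              # pass criterion: byte-identical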
00:06:38.866 [2024-12-14 06:36:52.848768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57749 ] 00:06:39.125 [2024-12-14 06:36:52.985351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.125 [2024-12-14 06:36:53.033405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.384  [2024-12-14T06:36:53.376Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:39.384 00:06:39.384 06:36:53 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:39.384 06:36:53 -- dd/basic_rw.sh@23 -- # count=15 00:06:39.384 06:36:53 -- dd/basic_rw.sh@24 -- # count=15 00:06:39.384 06:36:53 -- dd/basic_rw.sh@25 -- # size=61440 00:06:39.384 06:36:53 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:39.384 06:36:53 -- dd/common.sh@98 -- # xtrace_disable 00:06:39.384 06:36:53 -- common/autotest_common.sh@10 -- # set +x 00:06:39.952 06:36:53 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:39.952 06:36:53 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:39.952 06:36:53 -- dd/common.sh@31 -- # xtrace_disable 00:06:39.952 06:36:53 -- common/autotest_common.sh@10 -- # set +x 00:06:39.952 [2024-12-14 06:36:53.907128] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.952 [2024-12-14 06:36:53.907233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57773 ] 00:06:39.952 { 00:06:39.952 "subsystems": [ 00:06:39.952 { 00:06:39.952 "subsystem": "bdev", 00:06:39.952 "config": [ 00:06:39.952 { 00:06:39.952 "params": { 00:06:39.952 "trtype": "pcie", 00:06:39.952 "traddr": "0000:00:06.0", 00:06:39.952 "name": "Nvme0" 00:06:39.952 }, 00:06:39.952 "method": "bdev_nvme_attach_controller" 00:06:39.952 }, 00:06:39.952 { 00:06:39.952 "method": "bdev_wait_for_examine" 00:06:39.952 } 00:06:39.952 ] 00:06:39.952 } 00:06:39.952 ] 00:06:39.952 } 00:06:40.225 [2024-12-14 06:36:54.044749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.225 [2024-12-14 06:36:54.092723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.225  [2024-12-14T06:36:54.492Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:40.500 00:06:40.500 06:36:54 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:40.500 06:36:54 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:40.500 06:36:54 -- dd/common.sh@31 -- # xtrace_disable 00:06:40.500 06:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:40.500 [2024-12-14 06:36:54.431557] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:40.500 [2024-12-14 06:36:54.431672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57780 ] 00:06:40.500 { 00:06:40.500 "subsystems": [ 00:06:40.500 { 00:06:40.500 "subsystem": "bdev", 00:06:40.500 "config": [ 00:06:40.500 { 00:06:40.500 "params": { 00:06:40.500 "trtype": "pcie", 00:06:40.500 "traddr": "0000:00:06.0", 00:06:40.500 "name": "Nvme0" 00:06:40.500 }, 00:06:40.500 "method": "bdev_nvme_attach_controller" 00:06:40.500 }, 00:06:40.500 { 00:06:40.500 "method": "bdev_wait_for_examine" 00:06:40.500 } 00:06:40.500 ] 00:06:40.500 } 00:06:40.500 ] 00:06:40.500 } 00:06:40.760 [2024-12-14 06:36:54.564562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.760 [2024-12-14 06:36:54.618104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.760  [2024-12-14T06:36:55.011Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:41.019 00:06:41.019 06:36:54 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.019 06:36:54 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:41.019 06:36:54 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:41.019 06:36:54 -- dd/common.sh@11 -- # local nvme_ref= 00:06:41.019 06:36:54 -- dd/common.sh@12 -- # local size=61440 00:06:41.019 06:36:54 -- dd/common.sh@14 -- # local bs=1048576 00:06:41.019 06:36:54 -- dd/common.sh@15 -- # local count=1 00:06:41.019 06:36:54 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:41.019 06:36:54 -- dd/common.sh@18 -- # gen_conf 00:06:41.019 06:36:54 -- dd/common.sh@31 -- # xtrace_disable 00:06:41.019 06:36:54 -- common/autotest_common.sh@10 -- # set +x 00:06:41.019 [2024-12-14 06:36:54.965898] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:41.019 [2024-12-14 06:36:54.966002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57799 ] 00:06:41.019 { 00:06:41.019 "subsystems": [ 00:06:41.019 { 00:06:41.019 "subsystem": "bdev", 00:06:41.019 "config": [ 00:06:41.019 { 00:06:41.019 "params": { 00:06:41.019 "trtype": "pcie", 00:06:41.019 "traddr": "0000:00:06.0", 00:06:41.019 "name": "Nvme0" 00:06:41.019 }, 00:06:41.019 "method": "bdev_nvme_attach_controller" 00:06:41.019 }, 00:06:41.019 { 00:06:41.019 "method": "bdev_wait_for_examine" 00:06:41.019 } 00:06:41.019 ] 00:06:41.019 } 00:06:41.019 ] 00:06:41.019 } 00:06:41.278 [2024-12-14 06:36:55.102941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.278 [2024-12-14 06:36:55.151203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.278  [2024-12-14T06:36:55.529Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:41.537 00:06:41.537 06:36:55 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:41.537 06:36:55 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:41.537 06:36:55 -- dd/basic_rw.sh@23 -- # count=7 00:06:41.537 06:36:55 -- dd/basic_rw.sh@24 -- # count=7 00:06:41.537 06:36:55 -- dd/basic_rw.sh@25 -- # size=57344 00:06:41.537 06:36:55 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:41.537 06:36:55 -- dd/common.sh@98 -- # xtrace_disable 00:06:41.537 06:36:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.104 06:36:55 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:42.104 06:36:55 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:42.104 06:36:55 -- dd/common.sh@31 -- # xtrace_disable 00:06:42.104 06:36:55 -- common/autotest_common.sh@10 -- # set +x 00:06:42.104 [2024-12-14 06:36:56.032630] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:42.104 [2024-12-14 06:36:56.032745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57817 ] 00:06:42.104 { 00:06:42.104 "subsystems": [ 00:06:42.104 { 00:06:42.104 "subsystem": "bdev", 00:06:42.104 "config": [ 00:06:42.104 { 00:06:42.104 "params": { 00:06:42.104 "trtype": "pcie", 00:06:42.104 "traddr": "0000:00:06.0", 00:06:42.104 "name": "Nvme0" 00:06:42.104 }, 00:06:42.104 "method": "bdev_nvme_attach_controller" 00:06:42.104 }, 00:06:42.104 { 00:06:42.104 "method": "bdev_wait_for_examine" 00:06:42.104 } 00:06:42.104 ] 00:06:42.104 } 00:06:42.104 ] 00:06:42.104 } 00:06:42.363 [2024-12-14 06:36:56.170601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.363 [2024-12-14 06:36:56.217100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.363  [2024-12-14T06:36:56.614Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:42.622 00:06:42.622 06:36:56 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:42.622 06:36:56 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:42.622 06:36:56 -- dd/common.sh@31 -- # xtrace_disable 00:06:42.622 06:36:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.622 [2024-12-14 06:36:56.563310] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.622 [2024-12-14 06:36:56.563449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57824 ] 00:06:42.622 { 00:06:42.622 "subsystems": [ 00:06:42.622 { 00:06:42.622 "subsystem": "bdev", 00:06:42.622 "config": [ 00:06:42.622 { 00:06:42.622 "params": { 00:06:42.622 "trtype": "pcie", 00:06:42.622 "traddr": "0000:00:06.0", 00:06:42.622 "name": "Nvme0" 00:06:42.622 }, 00:06:42.622 "method": "bdev_nvme_attach_controller" 00:06:42.622 }, 00:06:42.622 { 00:06:42.622 "method": "bdev_wait_for_examine" 00:06:42.622 } 00:06:42.622 ] 00:06:42.622 } 00:06:42.622 ] 00:06:42.622 } 00:06:42.880 [2024-12-14 06:36:56.703013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.880 [2024-12-14 06:36:56.753476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.880  [2024-12-14T06:36:57.131Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:43.139 00:06:43.139 06:36:57 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.139 06:36:57 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:43.139 06:36:57 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:43.139 06:36:57 -- dd/common.sh@11 -- # local nvme_ref= 00:06:43.139 06:36:57 -- dd/common.sh@12 -- # local size=57344 00:06:43.139 06:36:57 -- dd/common.sh@14 -- # local bs=1048576 00:06:43.139 06:36:57 -- dd/common.sh@15 -- # local count=1 00:06:43.139 06:36:57 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:43.139 06:36:57 -- dd/common.sh@18 -- # gen_conf 00:06:43.139 06:36:57 -- dd/common.sh@31 -- # xtrace_disable 00:06:43.139 06:36:57 -- common/autotest_common.sh@10 -- # set +x 00:06:43.139 [2024-12-14 
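[Editor's note] The block sizes and queue depths being cycled through come from the top of basic_rw.sh as traced earlier (dd/basic_rw.sh@11-27): qds=(1 64) and three block sizes built by left-shifting the 4096-byte native size, each run as a write/read/diff/clear pass; the count differs per block size (15 at bs=4096, 7 at bs=8192 in this log). Schematically, with run_one_pass as a hypothetical wrapper for the steps sketched above:

native_bs=4096
qds=(1 64)
bss=()
for s in {0..2}; do
    bss+=( $(( native_bs << s )) )   # 4096, 8192, 16384
done
for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        run_one_pass "$bs" "$qd"     # gen_bytes, write, read back, diff, clear_nvme
    done
done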
06:36:57.101811] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:43.139 [2024-12-14 06:36:57.101928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57843 ] 00:06:43.139 { 00:06:43.139 "subsystems": [ 00:06:43.139 { 00:06:43.139 "subsystem": "bdev", 00:06:43.139 "config": [ 00:06:43.139 { 00:06:43.139 "params": { 00:06:43.139 "trtype": "pcie", 00:06:43.139 "traddr": "0000:00:06.0", 00:06:43.139 "name": "Nvme0" 00:06:43.139 }, 00:06:43.139 "method": "bdev_nvme_attach_controller" 00:06:43.139 }, 00:06:43.139 { 00:06:43.139 "method": "bdev_wait_for_examine" 00:06:43.139 } 00:06:43.139 ] 00:06:43.139 } 00:06:43.139 ] 00:06:43.139 } 00:06:43.398 [2024-12-14 06:36:57.238668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.399 [2024-12-14 06:36:57.284974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.658  [2024-12-14T06:36:57.650Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:43.658 00:06:43.658 06:36:57 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:43.658 06:36:57 -- dd/basic_rw.sh@23 -- # count=7 00:06:43.658 06:36:57 -- dd/basic_rw.sh@24 -- # count=7 00:06:43.658 06:36:57 -- dd/basic_rw.sh@25 -- # size=57344 00:06:43.658 06:36:57 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:43.658 06:36:57 -- dd/common.sh@98 -- # xtrace_disable 00:06:43.658 06:36:57 -- common/autotest_common.sh@10 -- # set +x 00:06:44.226 06:36:58 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:44.226 06:36:58 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:44.226 06:36:58 -- dd/common.sh@31 -- # xtrace_disable 00:06:44.226 06:36:58 -- common/autotest_common.sh@10 -- # set +x 00:06:44.226 [2024-12-14 06:36:58.131473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
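The for bs in "${bss[@]}" / for qd in "${qds[@]}" trace lines, together with the count=7/size=57344 pair used for bs=8192 and the count=3/size=49152 pair that appears further down for bs=16384, suggest the overall shape of the dd_rw loop. Below is a hedged reconstruction, not the script itself: the array contents and the per-iteration arithmetic are inferred from the values in this log, spdk_dd / gen_conf / clear_nvme stand in for the full invocations shown in the trace, and dd.dump0 is assumed to have been filled with size bytes of test data by the xtrace-disabled gen_bytes call.

    # Inferred loop: write dump0 to the bdev, read it back into dump1, compare.
    bss=(8192 16384)
    qds=(1 64)
    for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
        count=$(( bs == 8192 ? 7 : 3 ))   # yields the count values seen in the trace
        size=$(( bs * count ))            # 8192*7 = 57344, 16384*3 = 49152
        spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
        spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
        diff -q dd.dump0 dd.dump1
        clear_nvme Nvme0n1 '' "$size"
      done
    done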
00:06:44.226 [2024-12-14 06:36:58.131584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57861 ] 00:06:44.226 { 00:06:44.226 "subsystems": [ 00:06:44.226 { 00:06:44.226 "subsystem": "bdev", 00:06:44.226 "config": [ 00:06:44.226 { 00:06:44.226 "params": { 00:06:44.226 "trtype": "pcie", 00:06:44.226 "traddr": "0000:00:06.0", 00:06:44.226 "name": "Nvme0" 00:06:44.226 }, 00:06:44.226 "method": "bdev_nvme_attach_controller" 00:06:44.226 }, 00:06:44.226 { 00:06:44.226 "method": "bdev_wait_for_examine" 00:06:44.226 } 00:06:44.226 ] 00:06:44.226 } 00:06:44.226 ] 00:06:44.226 } 00:06:44.485 [2024-12-14 06:36:58.269930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.485 [2024-12-14 06:36:58.318791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.485  [2024-12-14T06:36:58.736Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:44.744 00:06:44.744 06:36:58 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:44.744 06:36:58 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:44.744 06:36:58 -- dd/common.sh@31 -- # xtrace_disable 00:06:44.744 06:36:58 -- common/autotest_common.sh@10 -- # set +x 00:06:44.744 [2024-12-14 06:36:58.644405] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:44.744 [2024-12-14 06:36:58.644495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57868 ] 00:06:44.744 { 00:06:44.744 "subsystems": [ 00:06:44.744 { 00:06:44.744 "subsystem": "bdev", 00:06:44.744 "config": [ 00:06:44.744 { 00:06:44.744 "params": { 00:06:44.744 "trtype": "pcie", 00:06:44.744 "traddr": "0000:00:06.0", 00:06:44.744 "name": "Nvme0" 00:06:44.744 }, 00:06:44.744 "method": "bdev_nvme_attach_controller" 00:06:44.744 }, 00:06:44.744 { 00:06:44.744 "method": "bdev_wait_for_examine" 00:06:44.744 } 00:06:44.744 ] 00:06:44.744 } 00:06:44.744 ] 00:06:44.744 } 00:06:45.003 [2024-12-14 06:36:58.782309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.003 [2024-12-14 06:36:58.829114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.003  [2024-12-14T06:36:59.254Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:45.262 00:06:45.262 06:36:59 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.262 06:36:59 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:45.262 06:36:59 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:45.262 06:36:59 -- dd/common.sh@11 -- # local nvme_ref= 00:06:45.262 06:36:59 -- dd/common.sh@12 -- # local size=57344 00:06:45.262 06:36:59 -- dd/common.sh@14 -- # local bs=1048576 00:06:45.262 06:36:59 -- dd/common.sh@15 -- # local count=1 00:06:45.262 06:36:59 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:45.262 06:36:59 -- dd/common.sh@18 -- # gen_conf 00:06:45.262 06:36:59 -- dd/common.sh@31 -- # xtrace_disable 00:06:45.262 06:36:59 -- common/autotest_common.sh@10 -- # set +x 00:06:45.262 [2024-12-14 
06:36:59.173148] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.262 [2024-12-14 06:36:59.173704] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57887 ] 00:06:45.262 { 00:06:45.262 "subsystems": [ 00:06:45.262 { 00:06:45.262 "subsystem": "bdev", 00:06:45.262 "config": [ 00:06:45.262 { 00:06:45.262 "params": { 00:06:45.262 "trtype": "pcie", 00:06:45.262 "traddr": "0000:00:06.0", 00:06:45.262 "name": "Nvme0" 00:06:45.262 }, 00:06:45.262 "method": "bdev_nvme_attach_controller" 00:06:45.262 }, 00:06:45.262 { 00:06:45.262 "method": "bdev_wait_for_examine" 00:06:45.262 } 00:06:45.262 ] 00:06:45.262 } 00:06:45.262 ] 00:06:45.262 } 00:06:45.521 [2024-12-14 06:36:59.310531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.521 [2024-12-14 06:36:59.357841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.521  [2024-12-14T06:36:59.772Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:45.780 00:06:45.780 06:36:59 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:45.780 06:36:59 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:45.780 06:36:59 -- dd/basic_rw.sh@23 -- # count=3 00:06:45.780 06:36:59 -- dd/basic_rw.sh@24 -- # count=3 00:06:45.780 06:36:59 -- dd/basic_rw.sh@25 -- # size=49152 00:06:45.780 06:36:59 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:45.780 06:36:59 -- dd/common.sh@98 -- # xtrace_disable 00:06:45.780 06:36:59 -- common/autotest_common.sh@10 -- # set +x 00:06:46.346 06:37:00 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:46.347 06:37:00 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:46.347 06:37:00 -- dd/common.sh@31 -- # xtrace_disable 00:06:46.347 06:37:00 -- common/autotest_common.sh@10 -- # set +x 00:06:46.347 [2024-12-14 06:37:00.134703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:46.347 [2024-12-14 06:37:00.134798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57905 ] 00:06:46.347 { 00:06:46.347 "subsystems": [ 00:06:46.347 { 00:06:46.347 "subsystem": "bdev", 00:06:46.347 "config": [ 00:06:46.347 { 00:06:46.347 "params": { 00:06:46.347 "trtype": "pcie", 00:06:46.347 "traddr": "0000:00:06.0", 00:06:46.347 "name": "Nvme0" 00:06:46.347 }, 00:06:46.347 "method": "bdev_nvme_attach_controller" 00:06:46.347 }, 00:06:46.347 { 00:06:46.347 "method": "bdev_wait_for_examine" 00:06:46.347 } 00:06:46.347 ] 00:06:46.347 } 00:06:46.347 ] 00:06:46.347 } 00:06:46.347 [2024-12-14 06:37:00.270477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.347 [2024-12-14 06:37:00.317053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.605  [2024-12-14T06:37:00.597Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:46.605 00:06:46.605 06:37:00 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:46.605 06:37:00 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:46.605 06:37:00 -- dd/common.sh@31 -- # xtrace_disable 00:06:46.605 06:37:00 -- common/autotest_common.sh@10 -- # set +x 00:06:46.864 [2024-12-14 06:37:00.643836] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.864 [2024-12-14 06:37:00.643961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57912 ] 00:06:46.864 { 00:06:46.864 "subsystems": [ 00:06:46.864 { 00:06:46.864 "subsystem": "bdev", 00:06:46.864 "config": [ 00:06:46.864 { 00:06:46.864 "params": { 00:06:46.864 "trtype": "pcie", 00:06:46.864 "traddr": "0000:00:06.0", 00:06:46.864 "name": "Nvme0" 00:06:46.864 }, 00:06:46.864 "method": "bdev_nvme_attach_controller" 00:06:46.864 }, 00:06:46.864 { 00:06:46.864 "method": "bdev_wait_for_examine" 00:06:46.864 } 00:06:46.864 ] 00:06:46.864 } 00:06:46.864 ] 00:06:46.864 } 00:06:46.864 [2024-12-14 06:37:00.781863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.864 [2024-12-14 06:37:00.831175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.123  [2024-12-14T06:37:01.374Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:47.382 00:06:47.382 06:37:01 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.382 06:37:01 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:47.382 06:37:01 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:47.382 06:37:01 -- dd/common.sh@11 -- # local nvme_ref= 00:06:47.382 06:37:01 -- dd/common.sh@12 -- # local size=49152 00:06:47.382 06:37:01 -- dd/common.sh@14 -- # local bs=1048576 00:06:47.382 06:37:01 -- dd/common.sh@15 -- # local count=1 00:06:47.382 06:37:01 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:47.382 06:37:01 -- dd/common.sh@18 -- # gen_conf 00:06:47.382 06:37:01 -- dd/common.sh@31 -- # xtrace_disable 00:06:47.382 06:37:01 -- common/autotest_common.sh@10 -- # set +x 00:06:47.382 [2024-12-14 
06:37:01.178834] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:47.382 [2024-12-14 06:37:01.178960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57931 ] 00:06:47.382 { 00:06:47.382 "subsystems": [ 00:06:47.382 { 00:06:47.382 "subsystem": "bdev", 00:06:47.382 "config": [ 00:06:47.382 { 00:06:47.382 "params": { 00:06:47.382 "trtype": "pcie", 00:06:47.382 "traddr": "0000:00:06.0", 00:06:47.382 "name": "Nvme0" 00:06:47.382 }, 00:06:47.382 "method": "bdev_nvme_attach_controller" 00:06:47.382 }, 00:06:47.382 { 00:06:47.382 "method": "bdev_wait_for_examine" 00:06:47.382 } 00:06:47.382 ] 00:06:47.382 } 00:06:47.382 ] 00:06:47.382 } 00:06:47.382 [2024-12-14 06:37:01.314522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.382 [2024-12-14 06:37:01.363002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.641  [2024-12-14T06:37:01.892Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:47.900 00:06:47.900 06:37:01 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:47.900 06:37:01 -- dd/basic_rw.sh@23 -- # count=3 00:06:47.900 06:37:01 -- dd/basic_rw.sh@24 -- # count=3 00:06:47.900 06:37:01 -- dd/basic_rw.sh@25 -- # size=49152 00:06:47.900 06:37:01 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:47.900 06:37:01 -- dd/common.sh@98 -- # xtrace_disable 00:06:47.900 06:37:01 -- common/autotest_common.sh@10 -- # set +x 00:06:48.158 06:37:02 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:48.158 06:37:02 -- dd/basic_rw.sh@30 -- # gen_conf 00:06:48.158 06:37:02 -- dd/common.sh@31 -- # xtrace_disable 00:06:48.158 06:37:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.158 [2024-12-14 06:37:02.142072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
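The 1024/1024 kB copy that completes just above is another clear_nvme pass: between iterations the trace expands clear_nvme Nvme0n1 '' <size> into a single write from /dev/zero so the next bs/qd combination starts from a zeroed device. A simplified paraphrase of that helper is sketched below, pinned to the locals the trace shows (bs=1048576, count=1); the real helper in dd/common.sh may handle its arguments more generally than this.

    # Hedged paraphrase of the zero-fill step; mirrors the locals in the trace.
    clear_nvme() {
      local bdev=$1 nvme_ref=$2 size=$3
      local bs=1048576 count=1          # the trace shows a single 1 MiB zero write
      spdk_dd --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(gen_conf)
    }
    clear_nvme Nvme0n1 '' 49152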
00:06:48.158 [2024-12-14 06:37:02.142683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57949 ] 00:06:48.158 { 00:06:48.158 "subsystems": [ 00:06:48.158 { 00:06:48.158 "subsystem": "bdev", 00:06:48.158 "config": [ 00:06:48.158 { 00:06:48.158 "params": { 00:06:48.158 "trtype": "pcie", 00:06:48.158 "traddr": "0000:00:06.0", 00:06:48.158 "name": "Nvme0" 00:06:48.158 }, 00:06:48.158 "method": "bdev_nvme_attach_controller" 00:06:48.158 }, 00:06:48.158 { 00:06:48.158 "method": "bdev_wait_for_examine" 00:06:48.158 } 00:06:48.158 ] 00:06:48.158 } 00:06:48.158 ] 00:06:48.158 } 00:06:48.417 [2024-12-14 06:37:02.279081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.417 [2024-12-14 06:37:02.326280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.676  [2024-12-14T06:37:02.668Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:48.676 00:06:48.676 06:37:02 -- dd/basic_rw.sh@37 -- # gen_conf 00:06:48.676 06:37:02 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:48.676 06:37:02 -- dd/common.sh@31 -- # xtrace_disable 00:06:48.676 06:37:02 -- common/autotest_common.sh@10 -- # set +x 00:06:48.676 [2024-12-14 06:37:02.661120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.676 [2024-12-14 06:37:02.661234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57957 ] 00:06:48.676 { 00:06:48.676 "subsystems": [ 00:06:48.676 { 00:06:48.676 "subsystem": "bdev", 00:06:48.676 "config": [ 00:06:48.676 { 00:06:48.676 "params": { 00:06:48.676 "trtype": "pcie", 00:06:48.676 "traddr": "0000:00:06.0", 00:06:48.676 "name": "Nvme0" 00:06:48.676 }, 00:06:48.676 "method": "bdev_nvme_attach_controller" 00:06:48.676 }, 00:06:48.676 { 00:06:48.676 "method": "bdev_wait_for_examine" 00:06:48.676 } 00:06:48.676 ] 00:06:48.676 } 00:06:48.676 ] 00:06:48.676 } 00:06:48.935 [2024-12-14 06:37:02.798445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.935 [2024-12-14 06:37:02.851474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.194  [2024-12-14T06:37:03.186Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:49.194 00:06:49.194 06:37:03 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.194 06:37:03 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:49.194 06:37:03 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:49.194 06:37:03 -- dd/common.sh@11 -- # local nvme_ref= 00:06:49.194 06:37:03 -- dd/common.sh@12 -- # local size=49152 00:06:49.194 06:37:03 -- dd/common.sh@14 -- # local bs=1048576 00:06:49.194 06:37:03 -- dd/common.sh@15 -- # local count=1 00:06:49.194 06:37:03 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:49.194 06:37:03 -- dd/common.sh@18 -- # gen_conf 00:06:49.194 06:37:03 -- dd/common.sh@31 -- # xtrace_disable 00:06:49.194 06:37:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.452 [2024-12-14 
06:37:03.197139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.452 [2024-12-14 06:37:03.197237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57976 ] 00:06:49.452 { 00:06:49.452 "subsystems": [ 00:06:49.452 { 00:06:49.452 "subsystem": "bdev", 00:06:49.452 "config": [ 00:06:49.452 { 00:06:49.452 "params": { 00:06:49.452 "trtype": "pcie", 00:06:49.452 "traddr": "0000:00:06.0", 00:06:49.452 "name": "Nvme0" 00:06:49.452 }, 00:06:49.452 "method": "bdev_nvme_attach_controller" 00:06:49.452 }, 00:06:49.452 { 00:06:49.452 "method": "bdev_wait_for_examine" 00:06:49.452 } 00:06:49.452 ] 00:06:49.452 } 00:06:49.452 ] 00:06:49.452 } 00:06:49.452 [2024-12-14 06:37:03.329977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.452 [2024-12-14 06:37:03.377094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.711  [2024-12-14T06:37:03.703Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:49.711 00:06:49.711 00:06:49.711 real 0m12.470s 00:06:49.711 user 0m9.323s 00:06:49.711 sys 0m2.088s 00:06:49.711 06:37:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.711 06:37:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.711 ************************************ 00:06:49.711 END TEST dd_rw 00:06:49.711 ************************************ 00:06:49.711 06:37:03 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:49.711 06:37:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:49.711 06:37:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.711 06:37:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.970 ************************************ 00:06:49.970 START TEST dd_rw_offset 00:06:49.970 ************************************ 00:06:49.970 06:37:03 -- common/autotest_common.sh@1114 -- # basic_offset 00:06:49.970 06:37:03 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:49.970 06:37:03 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:49.970 06:37:03 -- dd/common.sh@98 -- # xtrace_disable 00:06:49.970 06:37:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.970 06:37:03 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:49.970 06:37:03 -- dd/basic_rw.sh@56 -- # 
data=vszkjsn4gxetssn98120xitskrvq6z09qkn8me2aeyll4ktqdqw531m9uo07byx5jrh02pzd7hp1bxlh0itj79waadykysqx4wfuo7jf17skihuexoupvo94fjldqpvfq3r45b6941hx8d6shlzxfgwuz34sikkjpn3t2snknlx9did1gkie60tho6zc61ugo2qmwgaqxt7yurj14wsssch3juqcodnb7r2l9kjbpsr0plnk96nkbx4vc2kf3seuejoiic7pafljwmkrhip6z90546r6zkavd71vtfmj8vc6ms7x9uas9t86fvt9m21qorxy4w6v01ngzt08ui8kmh59d7fd4k3y9inzml0xheapbj4j1bjisu1a8mnpd54a6qmmjmsw3eazuyq5nad0m6exusrb5aw5ayr5nin52g78li92638u2jm6c5gu4j9gh9cmd270tluehz13jldgznevv70cptxmhh1bp4hcw9m4qtxqsgka4s9lio507xxgkrkjmgjvas28n5xu6m2u9r97nqe4ovul7qu5mur2p59ogukz1z7a5pyl7pvdlbyvxwum9fszpw3hsp12b0afd4uup8k4welpm31peot3vrtptf6detbrtx91vrk0ks2fwu5z08kz3ou7y03y9apl163g0epa7hwrukwgyukra3gukr34fz9mvej99mrzvde05ilshpmlhl25xn6w8b1ug5jljkkg2so6j00exyvsqs6swr2zxaxxqvzw3ejzhcj17g62phmd6gnu0bct2yj5gw3z4hnvf8yp2gezg9tcwe5mj9ow8g1mhi6nzx1se0dfvy51vroo88o9pyz6zuz9zu5gkv0ajztqgl5acqrr2pdojbtoxk7h68wyw1z1w7nsmhufhc3liht94l7irkq3egdp1nbhka6sojnpkbwaiht98b470vj34oh3rkgbkefhegzttccjwxomj9vzcau9lzw9didk2h4nj35xuyoosstxjc0b042ac8drsrzc6ehyu2yrpa701kwpuow7d6dcsf8keoq32bvskn4ecm8xv1faordxg71a8vceyy99etxhv7tkfr32flysxaw3bxkyjjx22gagpkqjgx2z0gsesadcoq7mihh7wstrj3m7nh4gkeqvd7s2xcmbghrgqauuk5r0538dm1ks730lb2ucn4d7o77tqd4jmqk16mbwcxyhw5ozctwwkltyd30a2gggx6zrowbvsneoqyj0g16atu6wmcndt81d7cpfdhxt63y7h42cz1r73oq9ph2rsvjlkkama1skodlbxzbf94abg9aj7ut3dc8gy7ea8youuv72h9yobpsyx6v0q27gybwcbce8o3vlwfixpmwt6i8nsu6a22obhibel24wzmf71mku6wsur2psi54n463nhiys28q3fyyb6t57ta6s9abxvtfq5qtjpugjhhnqt0pxiy61ei645dqqoxchvpsm1p9sgp30ln2e4h04srhfntijf23muur3i72w82tc60lwpg1xyy8kjihh04qqa3cocv5im4u9joyswh8nmffv89hf8mfc20ij1jhfy6axwb5o36m1tzny1rfjk5uj4txwjv8tmcap4ow7jlroy2yl8jwzw74umkn2pz2cfru1thskznbxa9ks0s4yqf2qbnix0h8fiy0u6j02uupkd462n98z27ajf2il5sk8nerucviy5tdftu6e647a4z9vo9b25ns8cvi6rqvwnmvjbshcbenovychl5d0depwvkcyazi5fwqcdobd6dtz5p2cryducrpypeeq5ku397cu72njv8gabjclyfa9vz9ct3pgams31pq8cl0vxohzd22w1sfhlbzzay2fuu5bonwydse0dudk5pq33zn07jzhtzq93p0jag36xk3ncxyni0rsgnbrzjh3a23zihxfshdripgsgmfo9qc2e06a4atmmzz47sjl0wl5zquk1mqzaa6i7gp2rzpjvvsapxjwtkj39ib7rijowipm1ymuyffiifxad3qrhbf7mmjz62ggolpdzis9qbv38ih81917v13rf5sd0atw7dl0lzbvmgzqsae2fjggh2ppu7masi5enmyw4xhfgl3yraulad9vc1qonhfln619sl1o4y2akwy44mhda6dlb698a7dber6j1t8392gwea0hgmwk3ti350875c8tbk4jcv0rfls693mpk1cauxlf14bdsu5h2ulgs2v03hi5ipqp0qlidgqv19f41g4eg3agpslu7gt0jqvvyun6wi13z92p4zki8wbcj3mo8qf4r12noh9z3xzzs4ptqrzblqw27nn818eapz18vm2hnjobmpx6i3o4mzbzicq4a9dxao3f9yusdontfcv12dba879eu8zz786m8zcupyduwchqvp4pj02fyyiiz741t83c5iti2ua60cnvfxb8axoceuvd0922fzr138qiu2iwkqmrv3re6ta5gb72xyv2rvglrtvnee2ocyff5ew5heaec467i6hjq88r1qulypawxlhewu0k7846tdegomzcuu2xza681cp2k04hsk44gyxv72yatiygkvdjdnk66bmng0ev79rvz6go2iqt9dgl2m00iz0aka37l5ms8qcfyxtenurv6ktwps6vsl6e3kgs26nstt9s6wi9o4tf4fv4zhbnq7b9cqqjum4576qgjwozq2b9nclkjp64m34cd4yofg6fk3nn9i1vsk7k55tgwk18srrv5sxqmyhyg3df8cg65rs7uo6ckow8el7h54a95ouxqy7blup9vqg7vem1tzvxly78nlxk7hqtb7ib2qv0tcav2y6xtwsxsez84xkkgd9vq1rhwmj3thuyzhg9506eds7k5ad0dmoh4db3i6ycjcxhupqsit2pxk6gc9j2rn7y6jjcy2e7bgp201del0oot7j1u4pkr4vskilrp3uutonceal3t6i43nl21vd5x76wmwc6ajocazgswkqtriofhsu8t9dkszsl2wft3l3couuibb3dsf8xn2m5qy141xqp6768d2wkw5peatzpher9z3gpulprgxdm3y4br18c537zt2rhk0qx9yxhx041q66s7f19lzzadjyy51el3swqxq73u0yebd9joibl4m8mchiqm43f5u3i5sson6gel6lpiv3m40cx0245nqsrnexb6c3vr1uzj4ot3fsced8gjkg7y1h1zze2x9ptex3ewrvdv3hav8666robmm72846pnjvns0ok4me557hcdsxs5y91nn8tgxpa6v3vpyw2qdetraq44qjw0t6djo9a0ek0mziztv6u80hdciterk8d8qjedfhiltxk6gbr3q3ovwd0sq6iv3oajrwi49fxg97r2znrwdz9sd53028kfx2iq26i1rc6zkztt838pjecn4jbavczbu3pqd5utp0aun2l5ssyadrieja0wgdtbplr1ir0h1pcjfaxtr9nb6lorrtu6jys3zwmuphi669bu2h84yyvdxgyjeho4m
1lkwdtufhqo4w4iv7xuq2ghj6me5z5zuvvfym4lj5ujmr1u679o5f4vaq42rguov6ytcah3vah3s92ibk8dkjx88et8adgocohxbdevpe4faknh90s7uxh30isbpamy3fdmtglmsrkb87ysimup0boi6lxz793swze1u2n0io6qopis8u36o3wdn1i50h8cdl3oek9n4a8ftv0wqyq08upsq43famtv1aj2o2dyuowq68zixmhj1sumzebk3mbxcyevl1w59aseyonxptf6iswjn33zfe8398fvhglnfiu2hophoxlv6mlcrbjhl1sffv168leb18m70ncvbhbg5bnojmxuqia8u06zl7pqy6wwl9u0bjl4l7piljlqp38yc0z5pzvib97q7jsm8ksaca0orbh9th92n7xvuu6ysp9206m4rnpd09a4x3wa1ijb4f31jp5f21emkgp1ncvk0iiezbv24g55u66dit5toc28s6pptiginrwrhgqryg4l5evozoj7oohogdg5sdpae5t9mx4m9t25j1m 00:06:49.970 06:37:03 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:49.970 06:37:03 -- dd/basic_rw.sh@59 -- # gen_conf 00:06:49.970 06:37:03 -- dd/common.sh@31 -- # xtrace_disable 00:06:49.970 06:37:03 -- common/autotest_common.sh@10 -- # set +x 00:06:49.970 [2024-12-14 06:37:03.805092] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.970 [2024-12-14 06:37:03.805197] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58000 ] 00:06:49.970 { 00:06:49.970 "subsystems": [ 00:06:49.970 { 00:06:49.970 "subsystem": "bdev", 00:06:49.970 "config": [ 00:06:49.970 { 00:06:49.970 "params": { 00:06:49.970 "trtype": "pcie", 00:06:49.970 "traddr": "0000:00:06.0", 00:06:49.970 "name": "Nvme0" 00:06:49.970 }, 00:06:49.970 "method": "bdev_nvme_attach_controller" 00:06:49.970 }, 00:06:49.970 { 00:06:49.970 "method": "bdev_wait_for_examine" 00:06:49.970 } 00:06:49.970 ] 00:06:49.970 } 00:06:49.970 ] 00:06:49.970 } 00:06:49.970 [2024-12-14 06:37:03.942926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.229 [2024-12-14 06:37:03.994253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.229  [2024-12-14T06:37:04.480Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:50.488 00:06:50.488 06:37:04 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:50.488 06:37:04 -- dd/basic_rw.sh@65 -- # gen_conf 00:06:50.488 06:37:04 -- dd/common.sh@31 -- # xtrace_disable 00:06:50.488 06:37:04 -- common/autotest_common.sh@10 -- # set +x 00:06:50.488 [2024-12-14 06:37:04.327599] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
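The dd_rw_offset records around this point take the 4096-byte string generated above, write it one block into the bdev with --seek=1, and read it back starting one block in with --skip=1 --count=1; the readback is then compared against the original string. The condensed paraphrase below makes two assumptions the log does not show directly: how dd.dump0 is seeded with the data (that step is xtrace-disabled) and which side of the final [[ ... ]] comparison holds the readback.

    data=$(gen_bytes 4096)          # the long alphanumeric blob printed above
    (( count = seek = skip = 1 ))
    printf %s "$data" > dd.dump0    # assumed seeding step, not visible in the xtrace
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek="$seek" --json <(gen_conf)
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip="$skip" --count="$count" --json <(gen_conf)
    read -rn4096 data_check < dd.dump1
    [[ $data == "$data_check" ]]    # direction of the trace's comparison is assumed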
00:06:50.488 [2024-12-14 06:37:04.327694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58018 ] 00:06:50.488 { 00:06:50.488 "subsystems": [ 00:06:50.488 { 00:06:50.488 "subsystem": "bdev", 00:06:50.488 "config": [ 00:06:50.488 { 00:06:50.488 "params": { 00:06:50.488 "trtype": "pcie", 00:06:50.488 "traddr": "0000:00:06.0", 00:06:50.488 "name": "Nvme0" 00:06:50.488 }, 00:06:50.488 "method": "bdev_nvme_attach_controller" 00:06:50.488 }, 00:06:50.488 { 00:06:50.488 "method": "bdev_wait_for_examine" 00:06:50.488 } 00:06:50.488 ] 00:06:50.488 } 00:06:50.488 ] 00:06:50.488 } 00:06:50.488 [2024-12-14 06:37:04.464683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.747 [2024-12-14 06:37:04.516418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.747  [2024-12-14T06:37:04.998Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:51.006 00:06:51.006 06:37:04 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:51.007 06:37:04 -- dd/basic_rw.sh@72 -- # [[ vszkjsn4gxetssn98120xitskrvq6z09qkn8me2aeyll4ktqdqw531m9uo07byx5jrh02pzd7hp1bxlh0itj79waadykysqx4wfuo7jf17skihuexoupvo94fjldqpvfq3r45b6941hx8d6shlzxfgwuz34sikkjpn3t2snknlx9did1gkie60tho6zc61ugo2qmwgaqxt7yurj14wsssch3juqcodnb7r2l9kjbpsr0plnk96nkbx4vc2kf3seuejoiic7pafljwmkrhip6z90546r6zkavd71vtfmj8vc6ms7x9uas9t86fvt9m21qorxy4w6v01ngzt08ui8kmh59d7fd4k3y9inzml0xheapbj4j1bjisu1a8mnpd54a6qmmjmsw3eazuyq5nad0m6exusrb5aw5ayr5nin52g78li92638u2jm6c5gu4j9gh9cmd270tluehz13jldgznevv70cptxmhh1bp4hcw9m4qtxqsgka4s9lio507xxgkrkjmgjvas28n5xu6m2u9r97nqe4ovul7qu5mur2p59ogukz1z7a5pyl7pvdlbyvxwum9fszpw3hsp12b0afd4uup8k4welpm31peot3vrtptf6detbrtx91vrk0ks2fwu5z08kz3ou7y03y9apl163g0epa7hwrukwgyukra3gukr34fz9mvej99mrzvde05ilshpmlhl25xn6w8b1ug5jljkkg2so6j00exyvsqs6swr2zxaxxqvzw3ejzhcj17g62phmd6gnu0bct2yj5gw3z4hnvf8yp2gezg9tcwe5mj9ow8g1mhi6nzx1se0dfvy51vroo88o9pyz6zuz9zu5gkv0ajztqgl5acqrr2pdojbtoxk7h68wyw1z1w7nsmhufhc3liht94l7irkq3egdp1nbhka6sojnpkbwaiht98b470vj34oh3rkgbkefhegzttccjwxomj9vzcau9lzw9didk2h4nj35xuyoosstxjc0b042ac8drsrzc6ehyu2yrpa701kwpuow7d6dcsf8keoq32bvskn4ecm8xv1faordxg71a8vceyy99etxhv7tkfr32flysxaw3bxkyjjx22gagpkqjgx2z0gsesadcoq7mihh7wstrj3m7nh4gkeqvd7s2xcmbghrgqauuk5r0538dm1ks730lb2ucn4d7o77tqd4jmqk16mbwcxyhw5ozctwwkltyd30a2gggx6zrowbvsneoqyj0g16atu6wmcndt81d7cpfdhxt63y7h42cz1r73oq9ph2rsvjlkkama1skodlbxzbf94abg9aj7ut3dc8gy7ea8youuv72h9yobpsyx6v0q27gybwcbce8o3vlwfixpmwt6i8nsu6a22obhibel24wzmf71mku6wsur2psi54n463nhiys28q3fyyb6t57ta6s9abxvtfq5qtjpugjhhnqt0pxiy61ei645dqqoxchvpsm1p9sgp30ln2e4h04srhfntijf23muur3i72w82tc60lwpg1xyy8kjihh04qqa3cocv5im4u9joyswh8nmffv89hf8mfc20ij1jhfy6axwb5o36m1tzny1rfjk5uj4txwjv8tmcap4ow7jlroy2yl8jwzw74umkn2pz2cfru1thskznbxa9ks0s4yqf2qbnix0h8fiy0u6j02uupkd462n98z27ajf2il5sk8nerucviy5tdftu6e647a4z9vo9b25ns8cvi6rqvwnmvjbshcbenovychl5d0depwvkcyazi5fwqcdobd6dtz5p2cryducrpypeeq5ku397cu72njv8gabjclyfa9vz9ct3pgams31pq8cl0vxohzd22w1sfhlbzzay2fuu5bonwydse0dudk5pq33zn07jzhtzq93p0jag36xk3ncxyni0rsgnbrzjh3a23zihxfshdripgsgmfo9qc2e06a4atmmzz47sjl0wl5zquk1mqzaa6i7gp2rzpjvvsapxjwtkj39ib7rijowipm1ymuyffiifxad3qrhbf7mmjz62ggolpdzis9qbv38ih81917v13rf5sd0atw7dl0lzbvmgzqsae2fjggh2ppu7masi5enmyw4xhfgl3yraulad9vc1qonhfln619sl1o4y2akwy44mhda6dlb698a7dber6j1t8392gwea0hgmwk3ti350875c8tbk4jcv0rfls693mpk1cauxlf14bdsu5h2ulgs2v03hi5ipqp0qlidgqv19f41g4eg3agpslu7gt0jqvvyun6wi13z92p4zki8wbcj3mo8qf4r12noh9z3xzzs4ptqrzblqw27nn818eapz18vm2
hnjobmpx6i3o4mzbzicq4a9dxao3f9yusdontfcv12dba879eu8zz786m8zcupyduwchqvp4pj02fyyiiz741t83c5iti2ua60cnvfxb8axoceuvd0922fzr138qiu2iwkqmrv3re6ta5gb72xyv2rvglrtvnee2ocyff5ew5heaec467i6hjq88r1qulypawxlhewu0k7846tdegomzcuu2xza681cp2k04hsk44gyxv72yatiygkvdjdnk66bmng0ev79rvz6go2iqt9dgl2m00iz0aka37l5ms8qcfyxtenurv6ktwps6vsl6e3kgs26nstt9s6wi9o4tf4fv4zhbnq7b9cqqjum4576qgjwozq2b9nclkjp64m34cd4yofg6fk3nn9i1vsk7k55tgwk18srrv5sxqmyhyg3df8cg65rs7uo6ckow8el7h54a95ouxqy7blup9vqg7vem1tzvxly78nlxk7hqtb7ib2qv0tcav2y6xtwsxsez84xkkgd9vq1rhwmj3thuyzhg9506eds7k5ad0dmoh4db3i6ycjcxhupqsit2pxk6gc9j2rn7y6jjcy2e7bgp201del0oot7j1u4pkr4vskilrp3uutonceal3t6i43nl21vd5x76wmwc6ajocazgswkqtriofhsu8t9dkszsl2wft3l3couuibb3dsf8xn2m5qy141xqp6768d2wkw5peatzpher9z3gpulprgxdm3y4br18c537zt2rhk0qx9yxhx041q66s7f19lzzadjyy51el3swqxq73u0yebd9joibl4m8mchiqm43f5u3i5sson6gel6lpiv3m40cx0245nqsrnexb6c3vr1uzj4ot3fsced8gjkg7y1h1zze2x9ptex3ewrvdv3hav8666robmm72846pnjvns0ok4me557hcdsxs5y91nn8tgxpa6v3vpyw2qdetraq44qjw0t6djo9a0ek0mziztv6u80hdciterk8d8qjedfhiltxk6gbr3q3ovwd0sq6iv3oajrwi49fxg97r2znrwdz9sd53028kfx2iq26i1rc6zkztt838pjecn4jbavczbu3pqd5utp0aun2l5ssyadrieja0wgdtbplr1ir0h1pcjfaxtr9nb6lorrtu6jys3zwmuphi669bu2h84yyvdxgyjeho4m1lkwdtufhqo4w4iv7xuq2ghj6me5z5zuvvfym4lj5ujmr1u679o5f4vaq42rguov6ytcah3vah3s92ibk8dkjx88et8adgocohxbdevpe4faknh90s7uxh30isbpamy3fdmtglmsrkb87ysimup0boi6lxz793swze1u2n0io6qopis8u36o3wdn1i50h8cdl3oek9n4a8ftv0wqyq08upsq43famtv1aj2o2dyuowq68zixmhj1sumzebk3mbxcyevl1w59aseyonxptf6iswjn33zfe8398fvhglnfiu2hophoxlv6mlcrbjhl1sffv168leb18m70ncvbhbg5bnojmxuqia8u06zl7pqy6wwl9u0bjl4l7piljlqp38yc0z5pzvib97q7jsm8ksaca0orbh9th92n7xvuu6ysp9206m4rnpd09a4x3wa1ijb4f31jp5f21emkgp1ncvk0iiezbv24g55u66dit5toc28s6pptiginrwrhgqryg4l5evozoj7oohogdg5sdpae5t9mx4m9t25j1m == \v\s\z\k\j\s\n\4\g\x\e\t\s\s\n\9\8\1\2\0\x\i\t\s\k\r\v\q\6\z\0\9\q\k\n\8\m\e\2\a\e\y\l\l\4\k\t\q\d\q\w\5\3\1\m\9\u\o\0\7\b\y\x\5\j\r\h\0\2\p\z\d\7\h\p\1\b\x\l\h\0\i\t\j\7\9\w\a\a\d\y\k\y\s\q\x\4\w\f\u\o\7\j\f\1\7\s\k\i\h\u\e\x\o\u\p\v\o\9\4\f\j\l\d\q\p\v\f\q\3\r\4\5\b\6\9\4\1\h\x\8\d\6\s\h\l\z\x\f\g\w\u\z\3\4\s\i\k\k\j\p\n\3\t\2\s\n\k\n\l\x\9\d\i\d\1\g\k\i\e\6\0\t\h\o\6\z\c\6\1\u\g\o\2\q\m\w\g\a\q\x\t\7\y\u\r\j\1\4\w\s\s\s\c\h\3\j\u\q\c\o\d\n\b\7\r\2\l\9\k\j\b\p\s\r\0\p\l\n\k\9\6\n\k\b\x\4\v\c\2\k\f\3\s\e\u\e\j\o\i\i\c\7\p\a\f\l\j\w\m\k\r\h\i\p\6\z\9\0\5\4\6\r\6\z\k\a\v\d\7\1\v\t\f\m\j\8\v\c\6\m\s\7\x\9\u\a\s\9\t\8\6\f\v\t\9\m\2\1\q\o\r\x\y\4\w\6\v\0\1\n\g\z\t\0\8\u\i\8\k\m\h\5\9\d\7\f\d\4\k\3\y\9\i\n\z\m\l\0\x\h\e\a\p\b\j\4\j\1\b\j\i\s\u\1\a\8\m\n\p\d\5\4\a\6\q\m\m\j\m\s\w\3\e\a\z\u\y\q\5\n\a\d\0\m\6\e\x\u\s\r\b\5\a\w\5\a\y\r\5\n\i\n\5\2\g\7\8\l\i\9\2\6\3\8\u\2\j\m\6\c\5\g\u\4\j\9\g\h\9\c\m\d\2\7\0\t\l\u\e\h\z\1\3\j\l\d\g\z\n\e\v\v\7\0\c\p\t\x\m\h\h\1\b\p\4\h\c\w\9\m\4\q\t\x\q\s\g\k\a\4\s\9\l\i\o\5\0\7\x\x\g\k\r\k\j\m\g\j\v\a\s\2\8\n\5\x\u\6\m\2\u\9\r\9\7\n\q\e\4\o\v\u\l\7\q\u\5\m\u\r\2\p\5\9\o\g\u\k\z\1\z\7\a\5\p\y\l\7\p\v\d\l\b\y\v\x\w\u\m\9\f\s\z\p\w\3\h\s\p\1\2\b\0\a\f\d\4\u\u\p\8\k\4\w\e\l\p\m\3\1\p\e\o\t\3\v\r\t\p\t\f\6\d\e\t\b\r\t\x\9\1\v\r\k\0\k\s\2\f\w\u\5\z\0\8\k\z\3\o\u\7\y\0\3\y\9\a\p\l\1\6\3\g\0\e\p\a\7\h\w\r\u\k\w\g\y\u\k\r\a\3\g\u\k\r\3\4\f\z\9\m\v\e\j\9\9\m\r\z\v\d\e\0\5\i\l\s\h\p\m\l\h\l\2\5\x\n\6\w\8\b\1\u\g\5\j\l\j\k\k\g\2\s\o\6\j\0\0\e\x\y\v\s\q\s\6\s\w\r\2\z\x\a\x\x\q\v\z\w\3\e\j\z\h\c\j\1\7\g\6\2\p\h\m\d\6\g\n\u\0\b\c\t\2\y\j\5\g\w\3\z\4\h\n\v\f\8\y\p\2\g\e\z\g\9\t\c\w\e\5\m\j\9\o\w\8\g\1\m\h\i\6\n\z\x\1\s\e\0\d\f\v\y\5\1\v\r\o\o\8\8\o\9\p\y\z\6\z\u\z\9\z\u\5\g\k\v\0\a\j\z\t\q\g\l\5\a\c\q\r\r\2\p\d\o\j\b\t\o\x\k\7\h\6\8\w\y\w\1\z\1\w\7\n\s\m\h\u\f\h\c\3\l\i\h\t
\9\4\l\7\i\r\k\q\3\e\g\d\p\1\n\b\h\k\a\6\s\o\j\n\p\k\b\w\a\i\h\t\9\8\b\4\7\0\v\j\3\4\o\h\3\r\k\g\b\k\e\f\h\e\g\z\t\t\c\c\j\w\x\o\m\j\9\v\z\c\a\u\9\l\z\w\9\d\i\d\k\2\h\4\n\j\3\5\x\u\y\o\o\s\s\t\x\j\c\0\b\0\4\2\a\c\8\d\r\s\r\z\c\6\e\h\y\u\2\y\r\p\a\7\0\1\k\w\p\u\o\w\7\d\6\d\c\s\f\8\k\e\o\q\3\2\b\v\s\k\n\4\e\c\m\8\x\v\1\f\a\o\r\d\x\g\7\1\a\8\v\c\e\y\y\9\9\e\t\x\h\v\7\t\k\f\r\3\2\f\l\y\s\x\a\w\3\b\x\k\y\j\j\x\2\2\g\a\g\p\k\q\j\g\x\2\z\0\g\s\e\s\a\d\c\o\q\7\m\i\h\h\7\w\s\t\r\j\3\m\7\n\h\4\g\k\e\q\v\d\7\s\2\x\c\m\b\g\h\r\g\q\a\u\u\k\5\r\0\5\3\8\d\m\1\k\s\7\3\0\l\b\2\u\c\n\4\d\7\o\7\7\t\q\d\4\j\m\q\k\1\6\m\b\w\c\x\y\h\w\5\o\z\c\t\w\w\k\l\t\y\d\3\0\a\2\g\g\g\x\6\z\r\o\w\b\v\s\n\e\o\q\y\j\0\g\1\6\a\t\u\6\w\m\c\n\d\t\8\1\d\7\c\p\f\d\h\x\t\6\3\y\7\h\4\2\c\z\1\r\7\3\o\q\9\p\h\2\r\s\v\j\l\k\k\a\m\a\1\s\k\o\d\l\b\x\z\b\f\9\4\a\b\g\9\a\j\7\u\t\3\d\c\8\g\y\7\e\a\8\y\o\u\u\v\7\2\h\9\y\o\b\p\s\y\x\6\v\0\q\2\7\g\y\b\w\c\b\c\e\8\o\3\v\l\w\f\i\x\p\m\w\t\6\i\8\n\s\u\6\a\2\2\o\b\h\i\b\e\l\2\4\w\z\m\f\7\1\m\k\u\6\w\s\u\r\2\p\s\i\5\4\n\4\6\3\n\h\i\y\s\2\8\q\3\f\y\y\b\6\t\5\7\t\a\6\s\9\a\b\x\v\t\f\q\5\q\t\j\p\u\g\j\h\h\n\q\t\0\p\x\i\y\6\1\e\i\6\4\5\d\q\q\o\x\c\h\v\p\s\m\1\p\9\s\g\p\3\0\l\n\2\e\4\h\0\4\s\r\h\f\n\t\i\j\f\2\3\m\u\u\r\3\i\7\2\w\8\2\t\c\6\0\l\w\p\g\1\x\y\y\8\k\j\i\h\h\0\4\q\q\a\3\c\o\c\v\5\i\m\4\u\9\j\o\y\s\w\h\8\n\m\f\f\v\8\9\h\f\8\m\f\c\2\0\i\j\1\j\h\f\y\6\a\x\w\b\5\o\3\6\m\1\t\z\n\y\1\r\f\j\k\5\u\j\4\t\x\w\j\v\8\t\m\c\a\p\4\o\w\7\j\l\r\o\y\2\y\l\8\j\w\z\w\7\4\u\m\k\n\2\p\z\2\c\f\r\u\1\t\h\s\k\z\n\b\x\a\9\k\s\0\s\4\y\q\f\2\q\b\n\i\x\0\h\8\f\i\y\0\u\6\j\0\2\u\u\p\k\d\4\6\2\n\9\8\z\2\7\a\j\f\2\i\l\5\s\k\8\n\e\r\u\c\v\i\y\5\t\d\f\t\u\6\e\6\4\7\a\4\z\9\v\o\9\b\2\5\n\s\8\c\v\i\6\r\q\v\w\n\m\v\j\b\s\h\c\b\e\n\o\v\y\c\h\l\5\d\0\d\e\p\w\v\k\c\y\a\z\i\5\f\w\q\c\d\o\b\d\6\d\t\z\5\p\2\c\r\y\d\u\c\r\p\y\p\e\e\q\5\k\u\3\9\7\c\u\7\2\n\j\v\8\g\a\b\j\c\l\y\f\a\9\v\z\9\c\t\3\p\g\a\m\s\3\1\p\q\8\c\l\0\v\x\o\h\z\d\2\2\w\1\s\f\h\l\b\z\z\a\y\2\f\u\u\5\b\o\n\w\y\d\s\e\0\d\u\d\k\5\p\q\3\3\z\n\0\7\j\z\h\t\z\q\9\3\p\0\j\a\g\3\6\x\k\3\n\c\x\y\n\i\0\r\s\g\n\b\r\z\j\h\3\a\2\3\z\i\h\x\f\s\h\d\r\i\p\g\s\g\m\f\o\9\q\c\2\e\0\6\a\4\a\t\m\m\z\z\4\7\s\j\l\0\w\l\5\z\q\u\k\1\m\q\z\a\a\6\i\7\g\p\2\r\z\p\j\v\v\s\a\p\x\j\w\t\k\j\3\9\i\b\7\r\i\j\o\w\i\p\m\1\y\m\u\y\f\f\i\i\f\x\a\d\3\q\r\h\b\f\7\m\m\j\z\6\2\g\g\o\l\p\d\z\i\s\9\q\b\v\3\8\i\h\8\1\9\1\7\v\1\3\r\f\5\s\d\0\a\t\w\7\d\l\0\l\z\b\v\m\g\z\q\s\a\e\2\f\j\g\g\h\2\p\p\u\7\m\a\s\i\5\e\n\m\y\w\4\x\h\f\g\l\3\y\r\a\u\l\a\d\9\v\c\1\q\o\n\h\f\l\n\6\1\9\s\l\1\o\4\y\2\a\k\w\y\4\4\m\h\d\a\6\d\l\b\6\9\8\a\7\d\b\e\r\6\j\1\t\8\3\9\2\g\w\e\a\0\h\g\m\w\k\3\t\i\3\5\0\8\7\5\c\8\t\b\k\4\j\c\v\0\r\f\l\s\6\9\3\m\p\k\1\c\a\u\x\l\f\1\4\b\d\s\u\5\h\2\u\l\g\s\2\v\0\3\h\i\5\i\p\q\p\0\q\l\i\d\g\q\v\1\9\f\4\1\g\4\e\g\3\a\g\p\s\l\u\7\g\t\0\j\q\v\v\y\u\n\6\w\i\1\3\z\9\2\p\4\z\k\i\8\w\b\c\j\3\m\o\8\q\f\4\r\1\2\n\o\h\9\z\3\x\z\z\s\4\p\t\q\r\z\b\l\q\w\2\7\n\n\8\1\8\e\a\p\z\1\8\v\m\2\h\n\j\o\b\m\p\x\6\i\3\o\4\m\z\b\z\i\c\q\4\a\9\d\x\a\o\3\f\9\y\u\s\d\o\n\t\f\c\v\1\2\d\b\a\8\7\9\e\u\8\z\z\7\8\6\m\8\z\c\u\p\y\d\u\w\c\h\q\v\p\4\p\j\0\2\f\y\y\i\i\z\7\4\1\t\8\3\c\5\i\t\i\2\u\a\6\0\c\n\v\f\x\b\8\a\x\o\c\e\u\v\d\0\9\2\2\f\z\r\1\3\8\q\i\u\2\i\w\k\q\m\r\v\3\r\e\6\t\a\5\g\b\7\2\x\y\v\2\r\v\g\l\r\t\v\n\e\e\2\o\c\y\f\f\5\e\w\5\h\e\a\e\c\4\6\7\i\6\h\j\q\8\8\r\1\q\u\l\y\p\a\w\x\l\h\e\w\u\0\k\7\8\4\6\t\d\e\g\o\m\z\c\u\u\2\x\z\a\6\8\1\c\p\2\k\0\4\h\s\k\4\4\g\y\x\v\7\2\y\a\t\i\y\g\k\v\d\j\d\n\k\6\6\b\m\n\g\0\e\v\7\9\r\v\z\6\g\o\2\i\q\t\9\d\g\l\2\m\0\0\i\z\0\a\k\a\3\7\l\5\m\s\8\q\c\f\y\x\t\e\n\u\r\v\6\k\t\w\p\s\6\v\s\l\6\e\3\k\g\s\2\6\n\s\
t\t\9\s\6\w\i\9\o\4\t\f\4\f\v\4\z\h\b\n\q\7\b\9\c\q\q\j\u\m\4\5\7\6\q\g\j\w\o\z\q\2\b\9\n\c\l\k\j\p\6\4\m\3\4\c\d\4\y\o\f\g\6\f\k\3\n\n\9\i\1\v\s\k\7\k\5\5\t\g\w\k\1\8\s\r\r\v\5\s\x\q\m\y\h\y\g\3\d\f\8\c\g\6\5\r\s\7\u\o\6\c\k\o\w\8\e\l\7\h\5\4\a\9\5\o\u\x\q\y\7\b\l\u\p\9\v\q\g\7\v\e\m\1\t\z\v\x\l\y\7\8\n\l\x\k\7\h\q\t\b\7\i\b\2\q\v\0\t\c\a\v\2\y\6\x\t\w\s\x\s\e\z\8\4\x\k\k\g\d\9\v\q\1\r\h\w\m\j\3\t\h\u\y\z\h\g\9\5\0\6\e\d\s\7\k\5\a\d\0\d\m\o\h\4\d\b\3\i\6\y\c\j\c\x\h\u\p\q\s\i\t\2\p\x\k\6\g\c\9\j\2\r\n\7\y\6\j\j\c\y\2\e\7\b\g\p\2\0\1\d\e\l\0\o\o\t\7\j\1\u\4\p\k\r\4\v\s\k\i\l\r\p\3\u\u\t\o\n\c\e\a\l\3\t\6\i\4\3\n\l\2\1\v\d\5\x\7\6\w\m\w\c\6\a\j\o\c\a\z\g\s\w\k\q\t\r\i\o\f\h\s\u\8\t\9\d\k\s\z\s\l\2\w\f\t\3\l\3\c\o\u\u\i\b\b\3\d\s\f\8\x\n\2\m\5\q\y\1\4\1\x\q\p\6\7\6\8\d\2\w\k\w\5\p\e\a\t\z\p\h\e\r\9\z\3\g\p\u\l\p\r\g\x\d\m\3\y\4\b\r\1\8\c\5\3\7\z\t\2\r\h\k\0\q\x\9\y\x\h\x\0\4\1\q\6\6\s\7\f\1\9\l\z\z\a\d\j\y\y\5\1\e\l\3\s\w\q\x\q\7\3\u\0\y\e\b\d\9\j\o\i\b\l\4\m\8\m\c\h\i\q\m\4\3\f\5\u\3\i\5\s\s\o\n\6\g\e\l\6\l\p\i\v\3\m\4\0\c\x\0\2\4\5\n\q\s\r\n\e\x\b\6\c\3\v\r\1\u\z\j\4\o\t\3\f\s\c\e\d\8\g\j\k\g\7\y\1\h\1\z\z\e\2\x\9\p\t\e\x\3\e\w\r\v\d\v\3\h\a\v\8\6\6\6\r\o\b\m\m\7\2\8\4\6\p\n\j\v\n\s\0\o\k\4\m\e\5\5\7\h\c\d\s\x\s\5\y\9\1\n\n\8\t\g\x\p\a\6\v\3\v\p\y\w\2\q\d\e\t\r\a\q\4\4\q\j\w\0\t\6\d\j\o\9\a\0\e\k\0\m\z\i\z\t\v\6\u\8\0\h\d\c\i\t\e\r\k\8\d\8\q\j\e\d\f\h\i\l\t\x\k\6\g\b\r\3\q\3\o\v\w\d\0\s\q\6\i\v\3\o\a\j\r\w\i\4\9\f\x\g\9\7\r\2\z\n\r\w\d\z\9\s\d\5\3\0\2\8\k\f\x\2\i\q\2\6\i\1\r\c\6\z\k\z\t\t\8\3\8\p\j\e\c\n\4\j\b\a\v\c\z\b\u\3\p\q\d\5\u\t\p\0\a\u\n\2\l\5\s\s\y\a\d\r\i\e\j\a\0\w\g\d\t\b\p\l\r\1\i\r\0\h\1\p\c\j\f\a\x\t\r\9\n\b\6\l\o\r\r\t\u\6\j\y\s\3\z\w\m\u\p\h\i\6\6\9\b\u\2\h\8\4\y\y\v\d\x\g\y\j\e\h\o\4\m\1\l\k\w\d\t\u\f\h\q\o\4\w\4\i\v\7\x\u\q\2\g\h\j\6\m\e\5\z\5\z\u\v\v\f\y\m\4\l\j\5\u\j\m\r\1\u\6\7\9\o\5\f\4\v\a\q\4\2\r\g\u\o\v\6\y\t\c\a\h\3\v\a\h\3\s\9\2\i\b\k\8\d\k\j\x\8\8\e\t\8\a\d\g\o\c\o\h\x\b\d\e\v\p\e\4\f\a\k\n\h\9\0\s\7\u\x\h\3\0\i\s\b\p\a\m\y\3\f\d\m\t\g\l\m\s\r\k\b\8\7\y\s\i\m\u\p\0\b\o\i\6\l\x\z\7\9\3\s\w\z\e\1\u\2\n\0\i\o\6\q\o\p\i\s\8\u\3\6\o\3\w\d\n\1\i\5\0\h\8\c\d\l\3\o\e\k\9\n\4\a\8\f\t\v\0\w\q\y\q\0\8\u\p\s\q\4\3\f\a\m\t\v\1\a\j\2\o\2\d\y\u\o\w\q\6\8\z\i\x\m\h\j\1\s\u\m\z\e\b\k\3\m\b\x\c\y\e\v\l\1\w\5\9\a\s\e\y\o\n\x\p\t\f\6\i\s\w\j\n\3\3\z\f\e\8\3\9\8\f\v\h\g\l\n\f\i\u\2\h\o\p\h\o\x\l\v\6\m\l\c\r\b\j\h\l\1\s\f\f\v\1\6\8\l\e\b\1\8\m\7\0\n\c\v\b\h\b\g\5\b\n\o\j\m\x\u\q\i\a\8\u\0\6\z\l\7\p\q\y\6\w\w\l\9\u\0\b\j\l\4\l\7\p\i\l\j\l\q\p\3\8\y\c\0\z\5\p\z\v\i\b\9\7\q\7\j\s\m\8\k\s\a\c\a\0\o\r\b\h\9\t\h\9\2\n\7\x\v\u\u\6\y\s\p\9\2\0\6\m\4\r\n\p\d\0\9\a\4\x\3\w\a\1\i\j\b\4\f\3\1\j\p\5\f\2\1\e\m\k\g\p\1\n\c\v\k\0\i\i\e\z\b\v\2\4\g\5\5\u\6\6\d\i\t\5\t\o\c\2\8\s\6\p\p\t\i\g\i\n\r\w\r\h\g\q\r\y\g\4\l\5\e\v\o\z\o\j\7\o\o\h\o\g\d\g\5\s\d\p\a\e\5\t\9\m\x\4\m\9\t\2\5\j\1\m ]] 00:06:51.007 00:06:51.007 real 0m1.096s 00:06:51.007 user 0m0.773s 00:06:51.007 sys 0m0.209s 00:06:51.007 06:37:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.007 06:37:04 -- common/autotest_common.sh@10 -- # set +x 00:06:51.007 ************************************ 00:06:51.007 END TEST dd_rw_offset 00:06:51.007 ************************************ 00:06:51.007 06:37:04 -- dd/basic_rw.sh@1 -- # cleanup 00:06:51.007 06:37:04 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:51.007 06:37:04 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:51.007 06:37:04 -- dd/common.sh@11 -- # local nvme_ref= 00:06:51.007 06:37:04 -- dd/common.sh@12 -- # local size=0xffff 00:06:51.007 06:37:04 -- dd/common.sh@14 -- 
# local bs=1048576 00:06:51.007 06:37:04 -- dd/common.sh@15 -- # local count=1 00:06:51.007 06:37:04 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:51.007 06:37:04 -- dd/common.sh@18 -- # gen_conf 00:06:51.007 06:37:04 -- dd/common.sh@31 -- # xtrace_disable 00:06:51.007 06:37:04 -- common/autotest_common.sh@10 -- # set +x 00:06:51.007 [2024-12-14 06:37:04.887912] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.007 [2024-12-14 06:37:04.888006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58040 ] 00:06:51.007 { 00:06:51.007 "subsystems": [ 00:06:51.007 { 00:06:51.007 "subsystem": "bdev", 00:06:51.007 "config": [ 00:06:51.007 { 00:06:51.007 "params": { 00:06:51.007 "trtype": "pcie", 00:06:51.007 "traddr": "0000:00:06.0", 00:06:51.007 "name": "Nvme0" 00:06:51.007 }, 00:06:51.007 "method": "bdev_nvme_attach_controller" 00:06:51.007 }, 00:06:51.007 { 00:06:51.007 "method": "bdev_wait_for_examine" 00:06:51.007 } 00:06:51.007 ] 00:06:51.007 } 00:06:51.007 ] 00:06:51.007 } 00:06:51.271 [2024-12-14 06:37:05.026106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.271 [2024-12-14 06:37:05.078261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.271  [2024-12-14T06:37:05.523Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:51.531 00:06:51.531 06:37:05 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.531 00:06:51.531 real 0m15.186s 00:06:51.531 user 0m11.080s 00:06:51.531 sys 0m2.723s 00:06:51.531 06:37:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.531 06:37:05 -- common/autotest_common.sh@10 -- # set +x 00:06:51.531 ************************************ 00:06:51.531 END TEST spdk_dd_basic_rw 00:06:51.531 ************************************ 00:06:51.531 06:37:05 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:51.531 06:37:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:51.531 06:37:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.531 06:37:05 -- common/autotest_common.sh@10 -- # set +x 00:06:51.531 ************************************ 00:06:51.531 START TEST spdk_dd_posix 00:06:51.531 ************************************ 00:06:51.531 06:37:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:51.531 * Looking for test storage... 
00:06:51.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:51.531 06:37:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:51.531 06:37:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:51.531 06:37:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:51.790 06:37:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:51.790 06:37:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:51.790 06:37:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:51.790 06:37:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:51.790 06:37:05 -- scripts/common.sh@335 -- # IFS=.-: 00:06:51.790 06:37:05 -- scripts/common.sh@335 -- # read -ra ver1 00:06:51.790 06:37:05 -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.790 06:37:05 -- scripts/common.sh@336 -- # read -ra ver2 00:06:51.790 06:37:05 -- scripts/common.sh@337 -- # local 'op=<' 00:06:51.790 06:37:05 -- scripts/common.sh@339 -- # ver1_l=2 00:06:51.790 06:37:05 -- scripts/common.sh@340 -- # ver2_l=1 00:06:51.790 06:37:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:51.790 06:37:05 -- scripts/common.sh@343 -- # case "$op" in 00:06:51.790 06:37:05 -- scripts/common.sh@344 -- # : 1 00:06:51.790 06:37:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:51.790 06:37:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.790 06:37:05 -- scripts/common.sh@364 -- # decimal 1 00:06:51.790 06:37:05 -- scripts/common.sh@352 -- # local d=1 00:06:51.790 06:37:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.790 06:37:05 -- scripts/common.sh@354 -- # echo 1 00:06:51.790 06:37:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:51.790 06:37:05 -- scripts/common.sh@365 -- # decimal 2 00:06:51.790 06:37:05 -- scripts/common.sh@352 -- # local d=2 00:06:51.790 06:37:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.790 06:37:05 -- scripts/common.sh@354 -- # echo 2 00:06:51.790 06:37:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:51.790 06:37:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:51.790 06:37:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:51.790 06:37:05 -- scripts/common.sh@367 -- # return 0 00:06:51.790 06:37:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.790 06:37:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:51.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.790 --rc genhtml_branch_coverage=1 00:06:51.790 --rc genhtml_function_coverage=1 00:06:51.790 --rc genhtml_legend=1 00:06:51.790 --rc geninfo_all_blocks=1 00:06:51.790 --rc geninfo_unexecuted_blocks=1 00:06:51.790 00:06:51.790 ' 00:06:51.790 06:37:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:51.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.790 --rc genhtml_branch_coverage=1 00:06:51.790 --rc genhtml_function_coverage=1 00:06:51.790 --rc genhtml_legend=1 00:06:51.790 --rc geninfo_all_blocks=1 00:06:51.790 --rc geninfo_unexecuted_blocks=1 00:06:51.790 00:06:51.790 ' 00:06:51.790 06:37:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:51.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.790 --rc genhtml_branch_coverage=1 00:06:51.790 --rc genhtml_function_coverage=1 00:06:51.790 --rc genhtml_legend=1 00:06:51.790 --rc geninfo_all_blocks=1 00:06:51.790 --rc geninfo_unexecuted_blocks=1 00:06:51.790 00:06:51.790 ' 00:06:51.790 06:37:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:51.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.790 --rc genhtml_branch_coverage=1 00:06:51.790 --rc genhtml_function_coverage=1 00:06:51.790 --rc genhtml_legend=1 00:06:51.790 --rc geninfo_all_blocks=1 00:06:51.790 --rc geninfo_unexecuted_blocks=1 00:06:51.790 00:06:51.790 ' 00:06:51.790 06:37:05 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.790 06:37:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.790 06:37:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.790 06:37:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.790 06:37:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.790 06:37:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.790 06:37:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.790 06:37:05 -- paths/export.sh@5 -- # export PATH 00:06:51.790 06:37:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.790 06:37:05 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:51.790 06:37:05 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:51.790 06:37:05 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:51.790 06:37:05 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:51.790 06:37:05 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.790 06:37:05 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.790 06:37:05 -- dd/posix.sh@130 -- # tests 00:06:51.790 06:37:05 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:51.790 * First test run, liburing in use 00:06:51.790 06:37:05 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:51.790 06:37:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:51.790 06:37:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.790 06:37:05 -- common/autotest_common.sh@10 -- # set +x 00:06:51.790 ************************************ 00:06:51.790 START TEST dd_flag_append 00:06:51.790 ************************************ 00:06:51.791 06:37:05 -- common/autotest_common.sh@1114 -- # append 00:06:51.791 06:37:05 -- dd/posix.sh@16 -- # local dump0 00:06:51.791 06:37:05 -- dd/posix.sh@17 -- # local dump1 00:06:51.791 06:37:05 -- dd/posix.sh@19 -- # gen_bytes 32 00:06:51.791 06:37:05 -- dd/common.sh@98 -- # xtrace_disable 00:06:51.791 06:37:05 -- common/autotest_common.sh@10 -- # set +x 00:06:51.791 06:37:05 -- dd/posix.sh@19 -- # dump0=now1fsef7rjldkx5fwyu2944rvsyujbf 00:06:51.791 06:37:05 -- dd/posix.sh@20 -- # gen_bytes 32 00:06:51.791 06:37:05 -- dd/common.sh@98 -- # xtrace_disable 00:06:51.791 06:37:05 -- common/autotest_common.sh@10 -- # set +x 00:06:51.791 06:37:05 -- dd/posix.sh@20 -- # dump1=7mlab9054jw53wm25rv3byh7rhhq203e 00:06:51.791 06:37:05 -- dd/posix.sh@22 -- # printf %s now1fsef7rjldkx5fwyu2944rvsyujbf 00:06:51.791 06:37:05 -- dd/posix.sh@23 -- # printf %s 7mlab9054jw53wm25rv3byh7rhhq203e 00:06:51.791 06:37:05 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:51.791 [2024-12-14 06:37:05.658869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
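The dd_flag_append test starting here seeds the two dump files with the 32-character strings shown above (dump0=now1fsef... and dump1=7mlab905...) and then copies dump0 onto dump1 with --oflag=append, so the destination should end up holding dump1 immediately followed by dump0, which is what the [[ ... ]] check a few records below verifies. A hedged condensation follows; the output redirections after the printf calls are not visible in the xtrace and are assumed.

    dump0=now1fsef7rjldkx5fwyu2944rvsyujbf    # gen_bytes 32, copied from the trace
    dump1=7mlab9054jw53wm25rv3byh7rhhq203e
    printf %s "$dump0" > dd.dump0             # redirections assumed
    printf %s "$dump1" > dd.dump1
    spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(<dd.dump1) == "${dump1}${dump0}" ]]  # the concatenation check logged below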
00:06:51.791 [2024-12-14 06:37:05.659146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58110 ] 00:06:52.049 [2024-12-14 06:37:05.790877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.049 [2024-12-14 06:37:05.838625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.049  [2024-12-14T06:37:06.300Z] Copying: 32/32 [B] (average 31 kBps) 00:06:52.308 00:06:52.308 ************************************ 00:06:52.308 END TEST dd_flag_append 00:06:52.308 ************************************ 00:06:52.308 06:37:06 -- dd/posix.sh@27 -- # [[ 7mlab9054jw53wm25rv3byh7rhhq203enow1fsef7rjldkx5fwyu2944rvsyujbf == \7\m\l\a\b\9\0\5\4\j\w\5\3\w\m\2\5\r\v\3\b\y\h\7\r\h\h\q\2\0\3\e\n\o\w\1\f\s\e\f\7\r\j\l\d\k\x\5\f\w\y\u\2\9\4\4\r\v\s\y\u\j\b\f ]] 00:06:52.308 00:06:52.308 real 0m0.452s 00:06:52.308 user 0m0.236s 00:06:52.308 sys 0m0.093s 00:06:52.308 06:37:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.308 06:37:06 -- common/autotest_common.sh@10 -- # set +x 00:06:52.308 06:37:06 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:52.308 06:37:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:52.308 06:37:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.308 06:37:06 -- common/autotest_common.sh@10 -- # set +x 00:06:52.308 ************************************ 00:06:52.308 START TEST dd_flag_directory 00:06:52.308 ************************************ 00:06:52.308 06:37:06 -- common/autotest_common.sh@1114 -- # directory 00:06:52.308 06:37:06 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.308 06:37:06 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.308 06:37:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.308 06:37:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.308 06:37:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.308 06:37:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.308 06:37:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.308 06:37:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.308 06:37:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.308 06:37:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.308 06:37:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.308 06:37:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.308 [2024-12-14 06:37:06.161526] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:52.308 [2024-12-14 06:37:06.161643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58137 ] 00:06:52.566 [2024-12-14 06:37:06.298998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.566 [2024-12-14 06:37:06.346939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.566 [2024-12-14 06:37:06.389522] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:52.566 [2024-12-14 06:37:06.389607] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:52.566 [2024-12-14 06:37:06.389641] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.566 [2024-12-14 06:37:06.447704] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:52.566 06:37:06 -- common/autotest_common.sh@653 -- # es=236 00:06:52.566 06:37:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.566 06:37:06 -- common/autotest_common.sh@662 -- # es=108 00:06:52.566 06:37:06 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:52.566 06:37:06 -- common/autotest_common.sh@670 -- # es=1 00:06:52.566 06:37:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.566 06:37:06 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:52.566 06:37:06 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.566 06:37:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:52.566 06:37:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.566 06:37:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.566 06:37:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.566 06:37:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.566 06:37:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.566 06:37:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.566 06:37:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.566 06:37:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.567 06:37:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:52.825 [2024-12-14 06:37:06.600476] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
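The "Not a directory" errors and the es=236 / es=108 / es=1 bookkeeping just above are the intended outcome: dd_flag_directory points spdk_dd at a regular file while asking for --iflag=directory (and, in the second pass that has just started, --oflag=directory) inside the NOT wrapper, which succeeds only when the wrapped command fails. Below is a simplified stand-in for that expect-failure pattern; the real NOT helper in autotest_common.sh also maps specific exit codes through a case statement that is elided here.

    not_() {                                # simplified stand-in for the NOT helper
      local es=0
      "$@" || es=$?
      (( es > 128 )) && es=$(( es - 128 )) # 236 becomes 108, as in the trace
      (( es != 0 ))                        # succeed only if the wrapped command failed
    }
    not_ spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0   # input is a plain file
    not_ spdk_dd --if=dd.dump0 --of=dd.dump0 --oflag=directory   # same check on the output side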
00:06:52.825 [2024-12-14 06:37:06.600568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58146 ] 00:06:52.825 [2024-12-14 06:37:06.736843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.825 [2024-12-14 06:37:06.787254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.085 [2024-12-14 06:37:06.833343] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:53.085 [2024-12-14 06:37:06.833662] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:53.085 [2024-12-14 06:37:06.833699] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.085 [2024-12-14 06:37:06.891929] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:53.085 ************************************ 00:06:53.085 END TEST dd_flag_directory 00:06:53.085 ************************************ 00:06:53.085 06:37:06 -- common/autotest_common.sh@653 -- # es=236 00:06:53.085 06:37:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.085 06:37:06 -- common/autotest_common.sh@662 -- # es=108 00:06:53.085 06:37:06 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:53.085 06:37:06 -- common/autotest_common.sh@670 -- # es=1 00:06:53.085 06:37:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.085 00:06:53.085 real 0m0.877s 00:06:53.085 user 0m0.488s 00:06:53.085 sys 0m0.181s 00:06:53.085 06:37:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.085 06:37:06 -- common/autotest_common.sh@10 -- # set +x 00:06:53.085 06:37:07 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:53.085 06:37:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:53.085 06:37:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.085 06:37:07 -- common/autotest_common.sh@10 -- # set +x 00:06:53.085 ************************************ 00:06:53.085 START TEST dd_flag_nofollow 00:06:53.085 ************************************ 00:06:53.085 06:37:07 -- common/autotest_common.sh@1114 -- # nofollow 00:06:53.085 06:37:07 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:53.085 06:37:07 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:53.085 06:37:07 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:53.085 06:37:07 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:53.085 06:37:07 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.085 06:37:07 -- common/autotest_common.sh@650 -- # local es=0 00:06:53.085 06:37:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.085 06:37:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.085 06:37:07 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.085 06:37:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.085 06:37:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.085 06:37:07 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.085 06:37:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.085 06:37:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.085 06:37:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:53.085 06:37:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.345 [2024-12-14 06:37:07.095733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.345 [2024-12-14 06:37:07.096025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58169 ] 00:06:53.345 [2024-12-14 06:37:07.234036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.345 [2024-12-14 06:37:07.283388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.345 [2024-12-14 06:37:07.330045] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:53.345 [2024-12-14 06:37:07.330088] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:53.345 [2024-12-14 06:37:07.330117] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.604 [2024-12-14 06:37:07.389124] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:53.604 06:37:07 -- common/autotest_common.sh@653 -- # es=216 00:06:53.604 06:37:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.604 06:37:07 -- common/autotest_common.sh@662 -- # es=88 00:06:53.604 06:37:07 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:53.604 06:37:07 -- common/autotest_common.sh@670 -- # es=1 00:06:53.604 06:37:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.604 06:37:07 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:53.604 06:37:07 -- common/autotest_common.sh@650 -- # local es=0 00:06:53.604 06:37:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:53.604 06:37:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.604 06:37:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.604 06:37:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.604 06:37:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.604 06:37:07 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.604 06:37:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.604 06:37:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.604 06:37:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:53.604 06:37:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:53.604 [2024-12-14 06:37:07.533398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.604 [2024-12-14 06:37:07.533489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58186 ] 00:06:53.862 [2024-12-14 06:37:07.670422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.862 [2024-12-14 06:37:07.718510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.862 [2024-12-14 06:37:07.761121] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:53.862 [2024-12-14 06:37:07.761422] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:53.862 [2024-12-14 06:37:07.761457] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.862 [2024-12-14 06:37:07.822021] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:06:54.121 06:37:07 -- common/autotest_common.sh@653 -- # es=216 00:06:54.121 06:37:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.121 06:37:07 -- common/autotest_common.sh@662 -- # es=88 00:06:54.121 06:37:07 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:54.121 06:37:07 -- common/autotest_common.sh@670 -- # es=1 00:06:54.121 06:37:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.121 06:37:07 -- dd/posix.sh@46 -- # gen_bytes 512 00:06:54.121 06:37:07 -- dd/common.sh@98 -- # xtrace_disable 00:06:54.121 06:37:07 -- common/autotest_common.sh@10 -- # set +x 00:06:54.121 06:37:07 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.121 [2024-12-14 06:37:07.974237] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:54.121 [2024-12-14 06:37:07.974333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58188 ] 00:06:54.121 [2024-12-14 06:37:08.110473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.380 [2024-12-14 06:37:08.164658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.380  [2024-12-14T06:37:08.631Z] Copying: 512/512 [B] (average 500 kBps) 00:06:54.639 00:06:54.639 ************************************ 00:06:54.639 END TEST dd_flag_nofollow 00:06:54.639 ************************************ 00:06:54.639 06:37:08 -- dd/posix.sh@49 -- # [[ zelmme5bl5rkvyn769njo45ntfg3kv073fxlpvr45beipxy5tys4kv01jy0hxk3clszgzrp8pmq1l1a7epsjwhvjm7qt011rt0dc9stel74wf74po75wb15g7mliflzpi014qik0ps32ab35wnuswowm6d4l4gbotkhdxhu8aiiooz5b5fk955vqt8fvetxsly262099or9fiq5goaoqz3nioiupgk3kizx24lkej9h8gnomwqx7mt3466f1im4729r2tojd4hpkuyw27qm0xoiqob5xwl4yzegd9t81hjvht7i8v1krsj9u3poqa8zsqp4zzwqkc0a7azvo42gzhsumynyu7k0krvt2pqoctfq2gflc16qhreftl8de9vhwaxzk4ogzwtlg4a0d7qzfx4p9e4xgau6vhs3eveu6mqxnsya8opeb8co06zy0y7j2rw17t1d7fczj21ctmcr6rj9u9rl6uaazg54192pisw5thcgss0wwvz0cxy8v8v5f == \z\e\l\m\m\e\5\b\l\5\r\k\v\y\n\7\6\9\n\j\o\4\5\n\t\f\g\3\k\v\0\7\3\f\x\l\p\v\r\4\5\b\e\i\p\x\y\5\t\y\s\4\k\v\0\1\j\y\0\h\x\k\3\c\l\s\z\g\z\r\p\8\p\m\q\1\l\1\a\7\e\p\s\j\w\h\v\j\m\7\q\t\0\1\1\r\t\0\d\c\9\s\t\e\l\7\4\w\f\7\4\p\o\7\5\w\b\1\5\g\7\m\l\i\f\l\z\p\i\0\1\4\q\i\k\0\p\s\3\2\a\b\3\5\w\n\u\s\w\o\w\m\6\d\4\l\4\g\b\o\t\k\h\d\x\h\u\8\a\i\i\o\o\z\5\b\5\f\k\9\5\5\v\q\t\8\f\v\e\t\x\s\l\y\2\6\2\0\9\9\o\r\9\f\i\q\5\g\o\a\o\q\z\3\n\i\o\i\u\p\g\k\3\k\i\z\x\2\4\l\k\e\j\9\h\8\g\n\o\m\w\q\x\7\m\t\3\4\6\6\f\1\i\m\4\7\2\9\r\2\t\o\j\d\4\h\p\k\u\y\w\2\7\q\m\0\x\o\i\q\o\b\5\x\w\l\4\y\z\e\g\d\9\t\8\1\h\j\v\h\t\7\i\8\v\1\k\r\s\j\9\u\3\p\o\q\a\8\z\s\q\p\4\z\z\w\q\k\c\0\a\7\a\z\v\o\4\2\g\z\h\s\u\m\y\n\y\u\7\k\0\k\r\v\t\2\p\q\o\c\t\f\q\2\g\f\l\c\1\6\q\h\r\e\f\t\l\8\d\e\9\v\h\w\a\x\z\k\4\o\g\z\w\t\l\g\4\a\0\d\7\q\z\f\x\4\p\9\e\4\x\g\a\u\6\v\h\s\3\e\v\e\u\6\m\q\x\n\s\y\a\8\o\p\e\b\8\c\o\0\6\z\y\0\y\7\j\2\r\w\1\7\t\1\d\7\f\c\z\j\2\1\c\t\m\c\r\6\r\j\9\u\9\r\l\6\u\a\a\z\g\5\4\1\9\2\p\i\s\w\5\t\h\c\g\s\s\0\w\w\v\z\0\c\x\y\8\v\8\v\5\f ]] 00:06:54.639 00:06:54.639 real 0m1.355s 00:06:54.639 user 0m0.731s 00:06:54.639 sys 0m0.293s 00:06:54.639 06:37:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.639 06:37:08 -- common/autotest_common.sh@10 -- # set +x 00:06:54.639 06:37:08 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:54.639 06:37:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:54.639 06:37:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.639 06:37:08 -- common/autotest_common.sh@10 -- # set +x 00:06:54.639 ************************************ 00:06:54.639 START TEST dd_flag_noatime 00:06:54.639 ************************************ 00:06:54.639 06:37:08 -- common/autotest_common.sh@1114 -- # noatime 00:06:54.639 06:37:08 -- dd/posix.sh@53 -- # local atime_if 00:06:54.639 06:37:08 -- dd/posix.sh@54 -- # local atime_of 00:06:54.639 06:37:08 -- dd/posix.sh@58 -- # gen_bytes 512 00:06:54.639 06:37:08 -- dd/common.sh@98 -- # xtrace_disable 00:06:54.639 06:37:08 -- common/autotest_common.sh@10 -- # set +x 00:06:54.639 06:37:08 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.639 06:37:08 -- dd/posix.sh@60 -- # atime_if=1734158228 
00:06:54.639 06:37:08 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.639 06:37:08 -- dd/posix.sh@61 -- # atime_of=1734158228 00:06:54.639 06:37:08 -- dd/posix.sh@66 -- # sleep 1 00:06:55.575 06:37:09 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.575 [2024-12-14 06:37:09.510594] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.575 [2024-12-14 06:37:09.510852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58234 ] 00:06:55.935 [2024-12-14 06:37:09.649980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.935 [2024-12-14 06:37:09.716861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.935  [2024-12-14T06:37:10.186Z] Copying: 512/512 [B] (average 500 kBps) 00:06:56.194 00:06:56.194 06:37:09 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:56.194 06:37:09 -- dd/posix.sh@69 -- # (( atime_if == 1734158228 )) 00:06:56.194 06:37:09 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:56.194 06:37:09 -- dd/posix.sh@70 -- # (( atime_of == 1734158228 )) 00:06:56.194 06:37:09 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:56.194 [2024-12-14 06:37:10.010170] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:56.194 [2024-12-14 06:37:10.010440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58240 ] 00:06:56.194 [2024-12-14 06:37:10.148047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.453 [2024-12-14 06:37:10.197874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.453  [2024-12-14T06:37:10.445Z] Copying: 512/512 [B] (average 500 kBps) 00:06:56.453 00:06:56.453 06:37:10 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:56.453 ************************************ 00:06:56.453 END TEST dd_flag_noatime 00:06:56.453 ************************************ 00:06:56.453 06:37:10 -- dd/posix.sh@73 -- # (( atime_if < 1734158230 )) 00:06:56.453 00:06:56.453 real 0m1.988s 00:06:56.453 user 0m0.534s 00:06:56.453 sys 0m0.213s 00:06:56.453 06:37:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.453 06:37:10 -- common/autotest_common.sh@10 -- # set +x 00:06:56.712 06:37:10 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:56.712 06:37:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:56.712 06:37:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.712 06:37:10 -- common/autotest_common.sh@10 -- # set +x 00:06:56.712 ************************************ 00:06:56.712 START TEST dd_flags_misc 00:06:56.712 ************************************ 00:06:56.712 06:37:10 -- common/autotest_common.sh@1114 -- # io 00:06:56.712 06:37:10 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:56.712 06:37:10 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:56.712 06:37:10 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:56.712 06:37:10 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:56.712 06:37:10 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:56.712 06:37:10 -- dd/common.sh@98 -- # xtrace_disable 00:06:56.712 06:37:10 -- common/autotest_common.sh@10 -- # set +x 00:06:56.712 06:37:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.712 06:37:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:56.712 [2024-12-14 06:37:10.541139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:56.712 [2024-12-14 06:37:10.541231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58272 ] 00:06:56.712 [2024-12-14 06:37:10.678073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.972 [2024-12-14 06:37:10.726460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.972  [2024-12-14T06:37:10.964Z] Copying: 512/512 [B] (average 500 kBps) 00:06:56.972 00:06:56.972 06:37:10 -- dd/posix.sh@93 -- # [[ y8ymu8rk9744u03v3mfmoxqjwgnjxsep35ms6fv5shxqm2vwdrm1rtsax7u98qk0pj3qgenuluojjxxvlc66wur56eimlkg306svjxuo6asxs3n0akkogx7e8xtwtvn0oyiqa7udfcd8ugv1suuw7ufpymo7buni4q347p1d0t662fsm5a730j1bzgxhkd49aeyl3mnh6w4pzoxll9tua514g5rkq2by71gjn5efknqd8rpedjy3btykyofg8x70ydmdf4in845wru6evmcznoxy4vu1ekbdl1twkvgnupxyt54h52f2jmmf8fmomdhhu8kqn5gdj1c53uaqy2ej3zbxawf7b8cq71oneed24ezatx1f5v1nurz72j05wdknn87fbgatae29t44zi752jtbfc97zf1jxx7a5leg69tagc9f9n32ql86yh6jf7cbo3vsu0anjj0ointekhiyg5d9b0nz4whepd0jvvnah0tn9ox3ipjvrqzh4rgli58gb == \y\8\y\m\u\8\r\k\9\7\4\4\u\0\3\v\3\m\f\m\o\x\q\j\w\g\n\j\x\s\e\p\3\5\m\s\6\f\v\5\s\h\x\q\m\2\v\w\d\r\m\1\r\t\s\a\x\7\u\9\8\q\k\0\p\j\3\q\g\e\n\u\l\u\o\j\j\x\x\v\l\c\6\6\w\u\r\5\6\e\i\m\l\k\g\3\0\6\s\v\j\x\u\o\6\a\s\x\s\3\n\0\a\k\k\o\g\x\7\e\8\x\t\w\t\v\n\0\o\y\i\q\a\7\u\d\f\c\d\8\u\g\v\1\s\u\u\w\7\u\f\p\y\m\o\7\b\u\n\i\4\q\3\4\7\p\1\d\0\t\6\6\2\f\s\m\5\a\7\3\0\j\1\b\z\g\x\h\k\d\4\9\a\e\y\l\3\m\n\h\6\w\4\p\z\o\x\l\l\9\t\u\a\5\1\4\g\5\r\k\q\2\b\y\7\1\g\j\n\5\e\f\k\n\q\d\8\r\p\e\d\j\y\3\b\t\y\k\y\o\f\g\8\x\7\0\y\d\m\d\f\4\i\n\8\4\5\w\r\u\6\e\v\m\c\z\n\o\x\y\4\v\u\1\e\k\b\d\l\1\t\w\k\v\g\n\u\p\x\y\t\5\4\h\5\2\f\2\j\m\m\f\8\f\m\o\m\d\h\h\u\8\k\q\n\5\g\d\j\1\c\5\3\u\a\q\y\2\e\j\3\z\b\x\a\w\f\7\b\8\c\q\7\1\o\n\e\e\d\2\4\e\z\a\t\x\1\f\5\v\1\n\u\r\z\7\2\j\0\5\w\d\k\n\n\8\7\f\b\g\a\t\a\e\2\9\t\4\4\z\i\7\5\2\j\t\b\f\c\9\7\z\f\1\j\x\x\7\a\5\l\e\g\6\9\t\a\g\c\9\f\9\n\3\2\q\l\8\6\y\h\6\j\f\7\c\b\o\3\v\s\u\0\a\n\j\j\0\o\i\n\t\e\k\h\i\y\g\5\d\9\b\0\n\z\4\w\h\e\p\d\0\j\v\v\n\a\h\0\t\n\9\o\x\3\i\p\j\v\r\q\z\h\4\r\g\l\i\5\8\g\b ]] 00:06:56.972 06:37:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.972 06:37:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:57.230 [2024-12-14 06:37:11.000258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:57.230 [2024-12-14 06:37:11.000347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58274 ] 00:06:57.230 [2024-12-14 06:37:11.137207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.230 [2024-12-14 06:37:11.189520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.535  [2024-12-14T06:37:11.527Z] Copying: 512/512 [B] (average 500 kBps) 00:06:57.535 00:06:57.535 06:37:11 -- dd/posix.sh@93 -- # [[ y8ymu8rk9744u03v3mfmoxqjwgnjxsep35ms6fv5shxqm2vwdrm1rtsax7u98qk0pj3qgenuluojjxxvlc66wur56eimlkg306svjxuo6asxs3n0akkogx7e8xtwtvn0oyiqa7udfcd8ugv1suuw7ufpymo7buni4q347p1d0t662fsm5a730j1bzgxhkd49aeyl3mnh6w4pzoxll9tua514g5rkq2by71gjn5efknqd8rpedjy3btykyofg8x70ydmdf4in845wru6evmcznoxy4vu1ekbdl1twkvgnupxyt54h52f2jmmf8fmomdhhu8kqn5gdj1c53uaqy2ej3zbxawf7b8cq71oneed24ezatx1f5v1nurz72j05wdknn87fbgatae29t44zi752jtbfc97zf1jxx7a5leg69tagc9f9n32ql86yh6jf7cbo3vsu0anjj0ointekhiyg5d9b0nz4whepd0jvvnah0tn9ox3ipjvrqzh4rgli58gb == \y\8\y\m\u\8\r\k\9\7\4\4\u\0\3\v\3\m\f\m\o\x\q\j\w\g\n\j\x\s\e\p\3\5\m\s\6\f\v\5\s\h\x\q\m\2\v\w\d\r\m\1\r\t\s\a\x\7\u\9\8\q\k\0\p\j\3\q\g\e\n\u\l\u\o\j\j\x\x\v\l\c\6\6\w\u\r\5\6\e\i\m\l\k\g\3\0\6\s\v\j\x\u\o\6\a\s\x\s\3\n\0\a\k\k\o\g\x\7\e\8\x\t\w\t\v\n\0\o\y\i\q\a\7\u\d\f\c\d\8\u\g\v\1\s\u\u\w\7\u\f\p\y\m\o\7\b\u\n\i\4\q\3\4\7\p\1\d\0\t\6\6\2\f\s\m\5\a\7\3\0\j\1\b\z\g\x\h\k\d\4\9\a\e\y\l\3\m\n\h\6\w\4\p\z\o\x\l\l\9\t\u\a\5\1\4\g\5\r\k\q\2\b\y\7\1\g\j\n\5\e\f\k\n\q\d\8\r\p\e\d\j\y\3\b\t\y\k\y\o\f\g\8\x\7\0\y\d\m\d\f\4\i\n\8\4\5\w\r\u\6\e\v\m\c\z\n\o\x\y\4\v\u\1\e\k\b\d\l\1\t\w\k\v\g\n\u\p\x\y\t\5\4\h\5\2\f\2\j\m\m\f\8\f\m\o\m\d\h\h\u\8\k\q\n\5\g\d\j\1\c\5\3\u\a\q\y\2\e\j\3\z\b\x\a\w\f\7\b\8\c\q\7\1\o\n\e\e\d\2\4\e\z\a\t\x\1\f\5\v\1\n\u\r\z\7\2\j\0\5\w\d\k\n\n\8\7\f\b\g\a\t\a\e\2\9\t\4\4\z\i\7\5\2\j\t\b\f\c\9\7\z\f\1\j\x\x\7\a\5\l\e\g\6\9\t\a\g\c\9\f\9\n\3\2\q\l\8\6\y\h\6\j\f\7\c\b\o\3\v\s\u\0\a\n\j\j\0\o\i\n\t\e\k\h\i\y\g\5\d\9\b\0\n\z\4\w\h\e\p\d\0\j\v\v\n\a\h\0\t\n\9\o\x\3\i\p\j\v\r\q\z\h\4\r\g\l\i\5\8\g\b ]] 00:06:57.535 06:37:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:57.535 06:37:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:57.535 [2024-12-14 06:37:11.456661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:57.536 [2024-12-14 06:37:11.456751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58282 ] 00:06:57.829 [2024-12-14 06:37:11.594053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.829 [2024-12-14 06:37:11.641180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.829  [2024-12-14T06:37:12.079Z] Copying: 512/512 [B] (average 166 kBps) 00:06:58.087 00:06:58.087 06:37:11 -- dd/posix.sh@93 -- # [[ y8ymu8rk9744u03v3mfmoxqjwgnjxsep35ms6fv5shxqm2vwdrm1rtsax7u98qk0pj3qgenuluojjxxvlc66wur56eimlkg306svjxuo6asxs3n0akkogx7e8xtwtvn0oyiqa7udfcd8ugv1suuw7ufpymo7buni4q347p1d0t662fsm5a730j1bzgxhkd49aeyl3mnh6w4pzoxll9tua514g5rkq2by71gjn5efknqd8rpedjy3btykyofg8x70ydmdf4in845wru6evmcznoxy4vu1ekbdl1twkvgnupxyt54h52f2jmmf8fmomdhhu8kqn5gdj1c53uaqy2ej3zbxawf7b8cq71oneed24ezatx1f5v1nurz72j05wdknn87fbgatae29t44zi752jtbfc97zf1jxx7a5leg69tagc9f9n32ql86yh6jf7cbo3vsu0anjj0ointekhiyg5d9b0nz4whepd0jvvnah0tn9ox3ipjvrqzh4rgli58gb == \y\8\y\m\u\8\r\k\9\7\4\4\u\0\3\v\3\m\f\m\o\x\q\j\w\g\n\j\x\s\e\p\3\5\m\s\6\f\v\5\s\h\x\q\m\2\v\w\d\r\m\1\r\t\s\a\x\7\u\9\8\q\k\0\p\j\3\q\g\e\n\u\l\u\o\j\j\x\x\v\l\c\6\6\w\u\r\5\6\e\i\m\l\k\g\3\0\6\s\v\j\x\u\o\6\a\s\x\s\3\n\0\a\k\k\o\g\x\7\e\8\x\t\w\t\v\n\0\o\y\i\q\a\7\u\d\f\c\d\8\u\g\v\1\s\u\u\w\7\u\f\p\y\m\o\7\b\u\n\i\4\q\3\4\7\p\1\d\0\t\6\6\2\f\s\m\5\a\7\3\0\j\1\b\z\g\x\h\k\d\4\9\a\e\y\l\3\m\n\h\6\w\4\p\z\o\x\l\l\9\t\u\a\5\1\4\g\5\r\k\q\2\b\y\7\1\g\j\n\5\e\f\k\n\q\d\8\r\p\e\d\j\y\3\b\t\y\k\y\o\f\g\8\x\7\0\y\d\m\d\f\4\i\n\8\4\5\w\r\u\6\e\v\m\c\z\n\o\x\y\4\v\u\1\e\k\b\d\l\1\t\w\k\v\g\n\u\p\x\y\t\5\4\h\5\2\f\2\j\m\m\f\8\f\m\o\m\d\h\h\u\8\k\q\n\5\g\d\j\1\c\5\3\u\a\q\y\2\e\j\3\z\b\x\a\w\f\7\b\8\c\q\7\1\o\n\e\e\d\2\4\e\z\a\t\x\1\f\5\v\1\n\u\r\z\7\2\j\0\5\w\d\k\n\n\8\7\f\b\g\a\t\a\e\2\9\t\4\4\z\i\7\5\2\j\t\b\f\c\9\7\z\f\1\j\x\x\7\a\5\l\e\g\6\9\t\a\g\c\9\f\9\n\3\2\q\l\8\6\y\h\6\j\f\7\c\b\o\3\v\s\u\0\a\n\j\j\0\o\i\n\t\e\k\h\i\y\g\5\d\9\b\0\n\z\4\w\h\e\p\d\0\j\v\v\n\a\h\0\t\n\9\o\x\3\i\p\j\v\r\q\z\h\4\r\g\l\i\5\8\g\b ]] 00:06:58.087 06:37:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:58.087 06:37:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:58.087 [2024-12-14 06:37:11.919211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:58.087 [2024-12-14 06:37:11.919303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58289 ] 00:06:58.087 [2024-12-14 06:37:12.054706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.346 [2024-12-14 06:37:12.102818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.346  [2024-12-14T06:37:12.338Z] Copying: 512/512 [B] (average 166 kBps) 00:06:58.346 00:06:58.346 06:37:12 -- dd/posix.sh@93 -- # [[ y8ymu8rk9744u03v3mfmoxqjwgnjxsep35ms6fv5shxqm2vwdrm1rtsax7u98qk0pj3qgenuluojjxxvlc66wur56eimlkg306svjxuo6asxs3n0akkogx7e8xtwtvn0oyiqa7udfcd8ugv1suuw7ufpymo7buni4q347p1d0t662fsm5a730j1bzgxhkd49aeyl3mnh6w4pzoxll9tua514g5rkq2by71gjn5efknqd8rpedjy3btykyofg8x70ydmdf4in845wru6evmcznoxy4vu1ekbdl1twkvgnupxyt54h52f2jmmf8fmomdhhu8kqn5gdj1c53uaqy2ej3zbxawf7b8cq71oneed24ezatx1f5v1nurz72j05wdknn87fbgatae29t44zi752jtbfc97zf1jxx7a5leg69tagc9f9n32ql86yh6jf7cbo3vsu0anjj0ointekhiyg5d9b0nz4whepd0jvvnah0tn9ox3ipjvrqzh4rgli58gb == \y\8\y\m\u\8\r\k\9\7\4\4\u\0\3\v\3\m\f\m\o\x\q\j\w\g\n\j\x\s\e\p\3\5\m\s\6\f\v\5\s\h\x\q\m\2\v\w\d\r\m\1\r\t\s\a\x\7\u\9\8\q\k\0\p\j\3\q\g\e\n\u\l\u\o\j\j\x\x\v\l\c\6\6\w\u\r\5\6\e\i\m\l\k\g\3\0\6\s\v\j\x\u\o\6\a\s\x\s\3\n\0\a\k\k\o\g\x\7\e\8\x\t\w\t\v\n\0\o\y\i\q\a\7\u\d\f\c\d\8\u\g\v\1\s\u\u\w\7\u\f\p\y\m\o\7\b\u\n\i\4\q\3\4\7\p\1\d\0\t\6\6\2\f\s\m\5\a\7\3\0\j\1\b\z\g\x\h\k\d\4\9\a\e\y\l\3\m\n\h\6\w\4\p\z\o\x\l\l\9\t\u\a\5\1\4\g\5\r\k\q\2\b\y\7\1\g\j\n\5\e\f\k\n\q\d\8\r\p\e\d\j\y\3\b\t\y\k\y\o\f\g\8\x\7\0\y\d\m\d\f\4\i\n\8\4\5\w\r\u\6\e\v\m\c\z\n\o\x\y\4\v\u\1\e\k\b\d\l\1\t\w\k\v\g\n\u\p\x\y\t\5\4\h\5\2\f\2\j\m\m\f\8\f\m\o\m\d\h\h\u\8\k\q\n\5\g\d\j\1\c\5\3\u\a\q\y\2\e\j\3\z\b\x\a\w\f\7\b\8\c\q\7\1\o\n\e\e\d\2\4\e\z\a\t\x\1\f\5\v\1\n\u\r\z\7\2\j\0\5\w\d\k\n\n\8\7\f\b\g\a\t\a\e\2\9\t\4\4\z\i\7\5\2\j\t\b\f\c\9\7\z\f\1\j\x\x\7\a\5\l\e\g\6\9\t\a\g\c\9\f\9\n\3\2\q\l\8\6\y\h\6\j\f\7\c\b\o\3\v\s\u\0\a\n\j\j\0\o\i\n\t\e\k\h\i\y\g\5\d\9\b\0\n\z\4\w\h\e\p\d\0\j\v\v\n\a\h\0\t\n\9\o\x\3\i\p\j\v\r\q\z\h\4\r\g\l\i\5\8\g\b ]] 00:06:58.346 06:37:12 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:58.346 06:37:12 -- dd/posix.sh@86 -- # gen_bytes 512 00:06:58.346 06:37:12 -- dd/common.sh@98 -- # xtrace_disable 00:06:58.346 06:37:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.346 06:37:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:58.346 06:37:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:58.605 [2024-12-14 06:37:12.369704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:58.605 [2024-12-14 06:37:12.369987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58297 ] 00:06:58.605 [2024-12-14 06:37:12.497300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.605 [2024-12-14 06:37:12.546059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.863  [2024-12-14T06:37:12.855Z] Copying: 512/512 [B] (average 500 kBps) 00:06:58.863 00:06:58.863 06:37:12 -- dd/posix.sh@93 -- # [[ 1p6okkq63vxa39tj5qabpz0dqhk0ahelo7x14wnv421anpu3xt9mjdae4j2uiy8q7bsjf3gu8k6d4xp7w9ocj85sp09inhllqabw9oll4nztfr5eobnpagp3lru33n16dx9c67llx5glj6a18j3i33ktc2vb4ac7msvcbdgns462gffoafd9mf6odwiry30ookobflfj67wa2x6fdh44rbb10fewu3nms2glrlla3vtdarieca4s4rj7dp07gw18dig16pa7mjth4idxxkwf1ji50ev9sc0sgf148vgcoup2hju7keqfpml91fplimp0zge7sd1jgebunoccvpegspvqwe958ko8m4lnivkqh7o57mlufzkua2l963mulqsjwxg9ygmw7tanjx0vh7w38te7pd3fb5891jmhasbb79z08si64z1wp4wc2vd0iude2mnpluh7rm2f8whmpn9ehp22dr6cj1wsnh4x34ncnaqen9sbuil1735bikfr7y30 == \1\p\6\o\k\k\q\6\3\v\x\a\3\9\t\j\5\q\a\b\p\z\0\d\q\h\k\0\a\h\e\l\o\7\x\1\4\w\n\v\4\2\1\a\n\p\u\3\x\t\9\m\j\d\a\e\4\j\2\u\i\y\8\q\7\b\s\j\f\3\g\u\8\k\6\d\4\x\p\7\w\9\o\c\j\8\5\s\p\0\9\i\n\h\l\l\q\a\b\w\9\o\l\l\4\n\z\t\f\r\5\e\o\b\n\p\a\g\p\3\l\r\u\3\3\n\1\6\d\x\9\c\6\7\l\l\x\5\g\l\j\6\a\1\8\j\3\i\3\3\k\t\c\2\v\b\4\a\c\7\m\s\v\c\b\d\g\n\s\4\6\2\g\f\f\o\a\f\d\9\m\f\6\o\d\w\i\r\y\3\0\o\o\k\o\b\f\l\f\j\6\7\w\a\2\x\6\f\d\h\4\4\r\b\b\1\0\f\e\w\u\3\n\m\s\2\g\l\r\l\l\a\3\v\t\d\a\r\i\e\c\a\4\s\4\r\j\7\d\p\0\7\g\w\1\8\d\i\g\1\6\p\a\7\m\j\t\h\4\i\d\x\x\k\w\f\1\j\i\5\0\e\v\9\s\c\0\s\g\f\1\4\8\v\g\c\o\u\p\2\h\j\u\7\k\e\q\f\p\m\l\9\1\f\p\l\i\m\p\0\z\g\e\7\s\d\1\j\g\e\b\u\n\o\c\c\v\p\e\g\s\p\v\q\w\e\9\5\8\k\o\8\m\4\l\n\i\v\k\q\h\7\o\5\7\m\l\u\f\z\k\u\a\2\l\9\6\3\m\u\l\q\s\j\w\x\g\9\y\g\m\w\7\t\a\n\j\x\0\v\h\7\w\3\8\t\e\7\p\d\3\f\b\5\8\9\1\j\m\h\a\s\b\b\7\9\z\0\8\s\i\6\4\z\1\w\p\4\w\c\2\v\d\0\i\u\d\e\2\m\n\p\l\u\h\7\r\m\2\f\8\w\h\m\p\n\9\e\h\p\2\2\d\r\6\c\j\1\w\s\n\h\4\x\3\4\n\c\n\a\q\e\n\9\s\b\u\i\l\1\7\3\5\b\i\k\f\r\7\y\3\0 ]] 00:06:58.863 06:37:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:58.863 06:37:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:58.864 [2024-12-14 06:37:12.810268] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:58.864 [2024-12-14 06:37:12.810360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58304 ] 00:06:59.122 [2024-12-14 06:37:12.945747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.122 [2024-12-14 06:37:12.993767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.122  [2024-12-14T06:37:13.373Z] Copying: 512/512 [B] (average 500 kBps) 00:06:59.381 00:06:59.381 06:37:13 -- dd/posix.sh@93 -- # [[ 1p6okkq63vxa39tj5qabpz0dqhk0ahelo7x14wnv421anpu3xt9mjdae4j2uiy8q7bsjf3gu8k6d4xp7w9ocj85sp09inhllqabw9oll4nztfr5eobnpagp3lru33n16dx9c67llx5glj6a18j3i33ktc2vb4ac7msvcbdgns462gffoafd9mf6odwiry30ookobflfj67wa2x6fdh44rbb10fewu3nms2glrlla3vtdarieca4s4rj7dp07gw18dig16pa7mjth4idxxkwf1ji50ev9sc0sgf148vgcoup2hju7keqfpml91fplimp0zge7sd1jgebunoccvpegspvqwe958ko8m4lnivkqh7o57mlufzkua2l963mulqsjwxg9ygmw7tanjx0vh7w38te7pd3fb5891jmhasbb79z08si64z1wp4wc2vd0iude2mnpluh7rm2f8whmpn9ehp22dr6cj1wsnh4x34ncnaqen9sbuil1735bikfr7y30 == \1\p\6\o\k\k\q\6\3\v\x\a\3\9\t\j\5\q\a\b\p\z\0\d\q\h\k\0\a\h\e\l\o\7\x\1\4\w\n\v\4\2\1\a\n\p\u\3\x\t\9\m\j\d\a\e\4\j\2\u\i\y\8\q\7\b\s\j\f\3\g\u\8\k\6\d\4\x\p\7\w\9\o\c\j\8\5\s\p\0\9\i\n\h\l\l\q\a\b\w\9\o\l\l\4\n\z\t\f\r\5\e\o\b\n\p\a\g\p\3\l\r\u\3\3\n\1\6\d\x\9\c\6\7\l\l\x\5\g\l\j\6\a\1\8\j\3\i\3\3\k\t\c\2\v\b\4\a\c\7\m\s\v\c\b\d\g\n\s\4\6\2\g\f\f\o\a\f\d\9\m\f\6\o\d\w\i\r\y\3\0\o\o\k\o\b\f\l\f\j\6\7\w\a\2\x\6\f\d\h\4\4\r\b\b\1\0\f\e\w\u\3\n\m\s\2\g\l\r\l\l\a\3\v\t\d\a\r\i\e\c\a\4\s\4\r\j\7\d\p\0\7\g\w\1\8\d\i\g\1\6\p\a\7\m\j\t\h\4\i\d\x\x\k\w\f\1\j\i\5\0\e\v\9\s\c\0\s\g\f\1\4\8\v\g\c\o\u\p\2\h\j\u\7\k\e\q\f\p\m\l\9\1\f\p\l\i\m\p\0\z\g\e\7\s\d\1\j\g\e\b\u\n\o\c\c\v\p\e\g\s\p\v\q\w\e\9\5\8\k\o\8\m\4\l\n\i\v\k\q\h\7\o\5\7\m\l\u\f\z\k\u\a\2\l\9\6\3\m\u\l\q\s\j\w\x\g\9\y\g\m\w\7\t\a\n\j\x\0\v\h\7\w\3\8\t\e\7\p\d\3\f\b\5\8\9\1\j\m\h\a\s\b\b\7\9\z\0\8\s\i\6\4\z\1\w\p\4\w\c\2\v\d\0\i\u\d\e\2\m\n\p\l\u\h\7\r\m\2\f\8\w\h\m\p\n\9\e\h\p\2\2\d\r\6\c\j\1\w\s\n\h\4\x\3\4\n\c\n\a\q\e\n\9\s\b\u\i\l\1\7\3\5\b\i\k\f\r\7\y\3\0 ]] 00:06:59.381 06:37:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:59.381 06:37:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:59.381 [2024-12-14 06:37:13.237974] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:59.381 [2024-12-14 06:37:13.238055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58311 ] 00:06:59.381 [2024-12-14 06:37:13.361473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.640 [2024-12-14 06:37:13.414554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.640  [2024-12-14T06:37:13.890Z] Copying: 512/512 [B] (average 500 kBps) 00:06:59.899 00:06:59.899 06:37:13 -- dd/posix.sh@93 -- # [[ 1p6okkq63vxa39tj5qabpz0dqhk0ahelo7x14wnv421anpu3xt9mjdae4j2uiy8q7bsjf3gu8k6d4xp7w9ocj85sp09inhllqabw9oll4nztfr5eobnpagp3lru33n16dx9c67llx5glj6a18j3i33ktc2vb4ac7msvcbdgns462gffoafd9mf6odwiry30ookobflfj67wa2x6fdh44rbb10fewu3nms2glrlla3vtdarieca4s4rj7dp07gw18dig16pa7mjth4idxxkwf1ji50ev9sc0sgf148vgcoup2hju7keqfpml91fplimp0zge7sd1jgebunoccvpegspvqwe958ko8m4lnivkqh7o57mlufzkua2l963mulqsjwxg9ygmw7tanjx0vh7w38te7pd3fb5891jmhasbb79z08si64z1wp4wc2vd0iude2mnpluh7rm2f8whmpn9ehp22dr6cj1wsnh4x34ncnaqen9sbuil1735bikfr7y30 == \1\p\6\o\k\k\q\6\3\v\x\a\3\9\t\j\5\q\a\b\p\z\0\d\q\h\k\0\a\h\e\l\o\7\x\1\4\w\n\v\4\2\1\a\n\p\u\3\x\t\9\m\j\d\a\e\4\j\2\u\i\y\8\q\7\b\s\j\f\3\g\u\8\k\6\d\4\x\p\7\w\9\o\c\j\8\5\s\p\0\9\i\n\h\l\l\q\a\b\w\9\o\l\l\4\n\z\t\f\r\5\e\o\b\n\p\a\g\p\3\l\r\u\3\3\n\1\6\d\x\9\c\6\7\l\l\x\5\g\l\j\6\a\1\8\j\3\i\3\3\k\t\c\2\v\b\4\a\c\7\m\s\v\c\b\d\g\n\s\4\6\2\g\f\f\o\a\f\d\9\m\f\6\o\d\w\i\r\y\3\0\o\o\k\o\b\f\l\f\j\6\7\w\a\2\x\6\f\d\h\4\4\r\b\b\1\0\f\e\w\u\3\n\m\s\2\g\l\r\l\l\a\3\v\t\d\a\r\i\e\c\a\4\s\4\r\j\7\d\p\0\7\g\w\1\8\d\i\g\1\6\p\a\7\m\j\t\h\4\i\d\x\x\k\w\f\1\j\i\5\0\e\v\9\s\c\0\s\g\f\1\4\8\v\g\c\o\u\p\2\h\j\u\7\k\e\q\f\p\m\l\9\1\f\p\l\i\m\p\0\z\g\e\7\s\d\1\j\g\e\b\u\n\o\c\c\v\p\e\g\s\p\v\q\w\e\9\5\8\k\o\8\m\4\l\n\i\v\k\q\h\7\o\5\7\m\l\u\f\z\k\u\a\2\l\9\6\3\m\u\l\q\s\j\w\x\g\9\y\g\m\w\7\t\a\n\j\x\0\v\h\7\w\3\8\t\e\7\p\d\3\f\b\5\8\9\1\j\m\h\a\s\b\b\7\9\z\0\8\s\i\6\4\z\1\w\p\4\w\c\2\v\d\0\i\u\d\e\2\m\n\p\l\u\h\7\r\m\2\f\8\w\h\m\p\n\9\e\h\p\2\2\d\r\6\c\j\1\w\s\n\h\4\x\3\4\n\c\n\a\q\e\n\9\s\b\u\i\l\1\7\3\5\b\i\k\f\r\7\y\3\0 ]] 00:06:59.899 06:37:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:59.899 06:37:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:59.899 [2024-12-14 06:37:13.691695] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:59.899 [2024-12-14 06:37:13.691790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58321 ] 00:06:59.899 [2024-12-14 06:37:13.829671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.899 [2024-12-14 06:37:13.878940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.157  [2024-12-14T06:37:14.149Z] Copying: 512/512 [B] (average 500 kBps) 00:07:00.157 00:07:00.157 ************************************ 00:07:00.157 END TEST dd_flags_misc 00:07:00.157 ************************************ 00:07:00.157 06:37:14 -- dd/posix.sh@93 -- # [[ 1p6okkq63vxa39tj5qabpz0dqhk0ahelo7x14wnv421anpu3xt9mjdae4j2uiy8q7bsjf3gu8k6d4xp7w9ocj85sp09inhllqabw9oll4nztfr5eobnpagp3lru33n16dx9c67llx5glj6a18j3i33ktc2vb4ac7msvcbdgns462gffoafd9mf6odwiry30ookobflfj67wa2x6fdh44rbb10fewu3nms2glrlla3vtdarieca4s4rj7dp07gw18dig16pa7mjth4idxxkwf1ji50ev9sc0sgf148vgcoup2hju7keqfpml91fplimp0zge7sd1jgebunoccvpegspvqwe958ko8m4lnivkqh7o57mlufzkua2l963mulqsjwxg9ygmw7tanjx0vh7w38te7pd3fb5891jmhasbb79z08si64z1wp4wc2vd0iude2mnpluh7rm2f8whmpn9ehp22dr6cj1wsnh4x34ncnaqen9sbuil1735bikfr7y30 == \1\p\6\o\k\k\q\6\3\v\x\a\3\9\t\j\5\q\a\b\p\z\0\d\q\h\k\0\a\h\e\l\o\7\x\1\4\w\n\v\4\2\1\a\n\p\u\3\x\t\9\m\j\d\a\e\4\j\2\u\i\y\8\q\7\b\s\j\f\3\g\u\8\k\6\d\4\x\p\7\w\9\o\c\j\8\5\s\p\0\9\i\n\h\l\l\q\a\b\w\9\o\l\l\4\n\z\t\f\r\5\e\o\b\n\p\a\g\p\3\l\r\u\3\3\n\1\6\d\x\9\c\6\7\l\l\x\5\g\l\j\6\a\1\8\j\3\i\3\3\k\t\c\2\v\b\4\a\c\7\m\s\v\c\b\d\g\n\s\4\6\2\g\f\f\o\a\f\d\9\m\f\6\o\d\w\i\r\y\3\0\o\o\k\o\b\f\l\f\j\6\7\w\a\2\x\6\f\d\h\4\4\r\b\b\1\0\f\e\w\u\3\n\m\s\2\g\l\r\l\l\a\3\v\t\d\a\r\i\e\c\a\4\s\4\r\j\7\d\p\0\7\g\w\1\8\d\i\g\1\6\p\a\7\m\j\t\h\4\i\d\x\x\k\w\f\1\j\i\5\0\e\v\9\s\c\0\s\g\f\1\4\8\v\g\c\o\u\p\2\h\j\u\7\k\e\q\f\p\m\l\9\1\f\p\l\i\m\p\0\z\g\e\7\s\d\1\j\g\e\b\u\n\o\c\c\v\p\e\g\s\p\v\q\w\e\9\5\8\k\o\8\m\4\l\n\i\v\k\q\h\7\o\5\7\m\l\u\f\z\k\u\a\2\l\9\6\3\m\u\l\q\s\j\w\x\g\9\y\g\m\w\7\t\a\n\j\x\0\v\h\7\w\3\8\t\e\7\p\d\3\f\b\5\8\9\1\j\m\h\a\s\b\b\7\9\z\0\8\s\i\6\4\z\1\w\p\4\w\c\2\v\d\0\i\u\d\e\2\m\n\p\l\u\h\7\r\m\2\f\8\w\h\m\p\n\9\e\h\p\2\2\d\r\6\c\j\1\w\s\n\h\4\x\3\4\n\c\n\a\q\e\n\9\s\b\u\i\l\1\7\3\5\b\i\k\f\r\7\y\3\0 ]] 00:07:00.157 00:07:00.157 real 0m3.619s 00:07:00.157 user 0m1.943s 00:07:00.157 sys 0m0.705s 00:07:00.157 06:37:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.157 06:37:14 -- common/autotest_common.sh@10 -- # set +x 00:07:00.157 06:37:14 -- dd/posix.sh@131 -- # tests_forced_aio 00:07:00.157 06:37:14 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:00.157 * Second test run, disabling liburing, forcing AIO 00:07:00.157 06:37:14 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:00.421 06:37:14 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:00.421 06:37:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:00.421 06:37:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.421 06:37:14 -- common/autotest_common.sh@10 -- # set +x 00:07:00.421 ************************************ 00:07:00.421 START TEST dd_flag_append_forced_aio 00:07:00.421 ************************************ 00:07:00.421 06:37:14 -- common/autotest_common.sh@1114 -- # append 00:07:00.421 06:37:14 -- dd/posix.sh@16 -- # local dump0 00:07:00.421 06:37:14 -- dd/posix.sh@17 -- # local dump1 00:07:00.421 06:37:14 -- dd/posix.sh@19 -- # gen_bytes 32 
00:07:00.421 06:37:14 -- dd/common.sh@98 -- # xtrace_disable 00:07:00.421 06:37:14 -- common/autotest_common.sh@10 -- # set +x 00:07:00.421 06:37:14 -- dd/posix.sh@19 -- # dump0=olnwum04z6h3wliw0ie568ksvxir75zd 00:07:00.421 06:37:14 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:00.421 06:37:14 -- dd/common.sh@98 -- # xtrace_disable 00:07:00.421 06:37:14 -- common/autotest_common.sh@10 -- # set +x 00:07:00.421 06:37:14 -- dd/posix.sh@20 -- # dump1=5thwdfpdmwpu9fi91o9emjtmgw3yjvkq 00:07:00.421 06:37:14 -- dd/posix.sh@22 -- # printf %s olnwum04z6h3wliw0ie568ksvxir75zd 00:07:00.421 06:37:14 -- dd/posix.sh@23 -- # printf %s 5thwdfpdmwpu9fi91o9emjtmgw3yjvkq 00:07:00.421 06:37:14 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:00.421 [2024-12-14 06:37:14.215278] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.421 [2024-12-14 06:37:14.215551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58343 ] 00:07:00.421 [2024-12-14 06:37:14.355478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.421 [2024-12-14 06:37:14.404919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.681  [2024-12-14T06:37:14.673Z] Copying: 32/32 [B] (average 31 kBps) 00:07:00.681 00:07:00.681 06:37:14 -- dd/posix.sh@27 -- # [[ 5thwdfpdmwpu9fi91o9emjtmgw3yjvkqolnwum04z6h3wliw0ie568ksvxir75zd == \5\t\h\w\d\f\p\d\m\w\p\u\9\f\i\9\1\o\9\e\m\j\t\m\g\w\3\y\j\v\k\q\o\l\n\w\u\m\0\4\z\6\h\3\w\l\i\w\0\i\e\5\6\8\k\s\v\x\i\r\7\5\z\d ]] 00:07:00.681 00:07:00.681 real 0m0.469s 00:07:00.681 user 0m0.252s 00:07:00.681 sys 0m0.098s 00:07:00.681 ************************************ 00:07:00.681 END TEST dd_flag_append_forced_aio 00:07:00.681 ************************************ 00:07:00.681 06:37:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:00.681 06:37:14 -- common/autotest_common.sh@10 -- # set +x 00:07:00.681 06:37:14 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:00.681 06:37:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:00.681 06:37:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.681 06:37:14 -- common/autotest_common.sh@10 -- # set +x 00:07:00.939 ************************************ 00:07:00.939 START TEST dd_flag_directory_forced_aio 00:07:00.939 ************************************ 00:07:00.939 06:37:14 -- common/autotest_common.sh@1114 -- # directory 00:07:00.939 06:37:14 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.939 06:37:14 -- common/autotest_common.sh@650 -- # local es=0 00:07:00.939 06:37:14 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.939 06:37:14 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.939 06:37:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.939 06:37:14 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.939 06:37:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.939 06:37:14 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.939 06:37:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:00.939 06:37:14 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:00.939 06:37:14 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:00.939 06:37:14 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.939 [2024-12-14 06:37:14.722423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:00.939 [2024-12-14 06:37:14.722505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58374 ] 00:07:00.939 [2024-12-14 06:37:14.854314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.939 [2024-12-14 06:37:14.902180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.200 [2024-12-14 06:37:14.948411] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:01.200 [2024-12-14 06:37:14.948460] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:01.200 [2024-12-14 06:37:14.948488] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.200 [2024-12-14 06:37:15.007582] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:01.200 06:37:15 -- common/autotest_common.sh@653 -- # es=236 00:07:01.200 06:37:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.200 06:37:15 -- common/autotest_common.sh@662 -- # es=108 00:07:01.200 06:37:15 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:01.200 06:37:15 -- common/autotest_common.sh@670 -- # es=1 00:07:01.200 06:37:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.200 06:37:15 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:01.200 06:37:15 -- common/autotest_common.sh@650 -- # local es=0 00:07:01.200 06:37:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:01.200 06:37:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.200 06:37:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.200 06:37:15 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.200 06:37:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.200 06:37:15 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.200 06:37:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.200 06:37:15 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.200 06:37:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.200 06:37:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:01.200 [2024-12-14 06:37:15.158707] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.200 [2024-12-14 06:37:15.158806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58379 ] 00:07:01.458 [2024-12-14 06:37:15.294823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.458 [2024-12-14 06:37:15.341591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.458 [2024-12-14 06:37:15.386315] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:01.458 [2024-12-14 06:37:15.386365] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:01.458 [2024-12-14 06:37:15.386394] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.458 [2024-12-14 06:37:15.445582] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:01.716 06:37:15 -- common/autotest_common.sh@653 -- # es=236 00:07:01.716 06:37:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.716 06:37:15 -- common/autotest_common.sh@662 -- # es=108 00:07:01.716 06:37:15 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:01.716 06:37:15 -- common/autotest_common.sh@670 -- # es=1 00:07:01.716 06:37:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.716 00:07:01.716 real 0m0.870s 00:07:01.716 user 0m0.481s 00:07:01.716 sys 0m0.181s 00:07:01.716 06:37:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.716 06:37:15 -- common/autotest_common.sh@10 -- # set +x 00:07:01.716 ************************************ 00:07:01.716 END TEST dd_flag_directory_forced_aio 00:07:01.716 ************************************ 00:07:01.716 06:37:15 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:01.716 06:37:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.716 06:37:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.716 06:37:15 -- common/autotest_common.sh@10 -- # set +x 00:07:01.716 ************************************ 00:07:01.716 START TEST dd_flag_nofollow_forced_aio 00:07:01.716 ************************************ 00:07:01.716 06:37:15 -- common/autotest_common.sh@1114 -- # nofollow 00:07:01.716 06:37:15 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:01.716 06:37:15 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:01.716 06:37:15 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:01.716 06:37:15 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:01.716 06:37:15 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.716 06:37:15 -- common/autotest_common.sh@650 -- # local es=0 00:07:01.716 06:37:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.716 06:37:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.716 06:37:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.716 06:37:15 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.716 06:37:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.716 06:37:15 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.716 06:37:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.716 06:37:15 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.716 06:37:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.716 06:37:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:01.716 [2024-12-14 06:37:15.652578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.716 [2024-12-14 06:37:15.652664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58408 ] 00:07:01.975 [2024-12-14 06:37:15.781982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.975 [2024-12-14 06:37:15.832298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.975 [2024-12-14 06:37:15.875707] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:01.975 [2024-12-14 06:37:15.875759] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:01.975 [2024-12-14 06:37:15.875790] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.975 [2024-12-14 06:37:15.934879] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:02.233 06:37:16 -- common/autotest_common.sh@653 -- # es=216 00:07:02.233 06:37:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.233 06:37:16 -- common/autotest_common.sh@662 -- # es=88 00:07:02.233 06:37:16 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:02.233 06:37:16 -- common/autotest_common.sh@670 -- # es=1 00:07:02.233 06:37:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.233 06:37:16 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:02.233 06:37:16 -- common/autotest_common.sh@650 -- # local es=0 00:07:02.233 06:37:16 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:02.233 06:37:16 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.233 06:37:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.233 06:37:16 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.233 06:37:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.233 06:37:16 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.233 06:37:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.233 06:37:16 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.233 06:37:16 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.233 06:37:16 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:02.233 [2024-12-14 06:37:16.091852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.233 [2024-12-14 06:37:16.091961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58417 ] 00:07:02.491 [2024-12-14 06:37:16.228600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.491 [2024-12-14 06:37:16.280285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.491 [2024-12-14 06:37:16.323630] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:02.491 [2024-12-14 06:37:16.323958] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:02.491 [2024-12-14 06:37:16.323977] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.491 [2024-12-14 06:37:16.381115] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:02.491 06:37:16 -- common/autotest_common.sh@653 -- # es=216 00:07:02.491 06:37:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.491 06:37:16 -- common/autotest_common.sh@662 -- # es=88 00:07:02.491 06:37:16 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:02.491 06:37:16 -- common/autotest_common.sh@670 -- # es=1 00:07:02.491 06:37:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.491 06:37:16 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:02.491 06:37:16 -- dd/common.sh@98 -- # xtrace_disable 00:07:02.491 06:37:16 -- common/autotest_common.sh@10 -- # set +x 00:07:02.749 06:37:16 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.749 [2024-12-14 06:37:16.522206] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:02.749 [2024-12-14 06:37:16.522288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58425 ] 00:07:02.749 [2024-12-14 06:37:16.644127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.749 [2024-12-14 06:37:16.697740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.007  [2024-12-14T06:37:16.999Z] Copying: 512/512 [B] (average 500 kBps) 00:07:03.007 00:07:03.007 ************************************ 00:07:03.007 END TEST dd_flag_nofollow_forced_aio 00:07:03.007 ************************************ 00:07:03.007 06:37:16 -- dd/posix.sh@49 -- # [[ r27p8lvbtqmcfuwym5switwr00oog6jmjplnd5se34ec7expkb3qf5f4nrb8qvedjn0cf66ka8deegr3vd77rwmgmw07yum6j9yz2hvpax4hlc7ixmuskjhiywggq7quryfsha5x6dfsh5mmdtu6r4njg4yez560ksat3yar9rvnqtnbm6k4snx2wxl2gx47el02bhvp2ig9hq5utafm2n8drzcawhd95p31uvr69d68mqe03gxw4u3qk9dc64i1t1nh1efutxgwc7jlbrev1sfrtr0r3rbtg5vkc639r2zjrpmqniyigpx7t37gjnozezq87vvf5abfjf6r3jk1tntem8qkn3s5gn4b3mli01yfszv8dt0da9u5oouk1mbwokudxc7vh7ovd2uqojpvxdrdlnwsiyj4bxx7i1wfiuumsmz2gsi3dl0xdgqthsuxpl3rrawyduk7vu1lv8bbaoz80fp9ljqmrpfcv1ha24jzpo8al2huvo74s7d6idt0 == \r\2\7\p\8\l\v\b\t\q\m\c\f\u\w\y\m\5\s\w\i\t\w\r\0\0\o\o\g\6\j\m\j\p\l\n\d\5\s\e\3\4\e\c\7\e\x\p\k\b\3\q\f\5\f\4\n\r\b\8\q\v\e\d\j\n\0\c\f\6\6\k\a\8\d\e\e\g\r\3\v\d\7\7\r\w\m\g\m\w\0\7\y\u\m\6\j\9\y\z\2\h\v\p\a\x\4\h\l\c\7\i\x\m\u\s\k\j\h\i\y\w\g\g\q\7\q\u\r\y\f\s\h\a\5\x\6\d\f\s\h\5\m\m\d\t\u\6\r\4\n\j\g\4\y\e\z\5\6\0\k\s\a\t\3\y\a\r\9\r\v\n\q\t\n\b\m\6\k\4\s\n\x\2\w\x\l\2\g\x\4\7\e\l\0\2\b\h\v\p\2\i\g\9\h\q\5\u\t\a\f\m\2\n\8\d\r\z\c\a\w\h\d\9\5\p\3\1\u\v\r\6\9\d\6\8\m\q\e\0\3\g\x\w\4\u\3\q\k\9\d\c\6\4\i\1\t\1\n\h\1\e\f\u\t\x\g\w\c\7\j\l\b\r\e\v\1\s\f\r\t\r\0\r\3\r\b\t\g\5\v\k\c\6\3\9\r\2\z\j\r\p\m\q\n\i\y\i\g\p\x\7\t\3\7\g\j\n\o\z\e\z\q\8\7\v\v\f\5\a\b\f\j\f\6\r\3\j\k\1\t\n\t\e\m\8\q\k\n\3\s\5\g\n\4\b\3\m\l\i\0\1\y\f\s\z\v\8\d\t\0\d\a\9\u\5\o\o\u\k\1\m\b\w\o\k\u\d\x\c\7\v\h\7\o\v\d\2\u\q\o\j\p\v\x\d\r\d\l\n\w\s\i\y\j\4\b\x\x\7\i\1\w\f\i\u\u\m\s\m\z\2\g\s\i\3\d\l\0\x\d\g\q\t\h\s\u\x\p\l\3\r\r\a\w\y\d\u\k\7\v\u\1\l\v\8\b\b\a\o\z\8\0\f\p\9\l\j\q\m\r\p\f\c\v\1\h\a\2\4\j\z\p\o\8\a\l\2\h\u\v\o\7\4\s\7\d\6\i\d\t\0 ]] 00:07:03.007 00:07:03.007 real 0m1.321s 00:07:03.007 user 0m0.726s 00:07:03.007 sys 0m0.262s 00:07:03.007 06:37:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.007 06:37:16 -- common/autotest_common.sh@10 -- # set +x 00:07:03.007 06:37:16 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:03.007 06:37:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:03.007 06:37:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.007 06:37:16 -- common/autotest_common.sh@10 -- # set +x 00:07:03.007 ************************************ 00:07:03.007 START TEST dd_flag_noatime_forced_aio 00:07:03.007 ************************************ 00:07:03.007 06:37:16 -- common/autotest_common.sh@1114 -- # noatime 00:07:03.007 06:37:16 -- dd/posix.sh@53 -- # local atime_if 00:07:03.007 06:37:16 -- dd/posix.sh@54 -- # local atime_of 00:07:03.007 06:37:16 -- dd/posix.sh@58 -- # gen_bytes 512 00:07:03.007 06:37:16 -- dd/common.sh@98 -- # xtrace_disable 00:07:03.007 06:37:16 -- common/autotest_common.sh@10 -- # set +x 00:07:03.007 06:37:16 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:03.007 06:37:16 -- dd/posix.sh@60 -- 
# atime_if=1734158236 00:07:03.007 06:37:16 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:03.007 06:37:16 -- dd/posix.sh@61 -- # atime_of=1734158236 00:07:03.007 06:37:16 -- dd/posix.sh@66 -- # sleep 1 00:07:04.382 06:37:17 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.382 [2024-12-14 06:37:18.047630] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.382 [2024-12-14 06:37:18.047747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58467 ] 00:07:04.382 [2024-12-14 06:37:18.184101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.382 [2024-12-14 06:37:18.252794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.382  [2024-12-14T06:37:18.632Z] Copying: 512/512 [B] (average 500 kBps) 00:07:04.640 00:07:04.640 06:37:18 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.640 06:37:18 -- dd/posix.sh@69 -- # (( atime_if == 1734158236 )) 00:07:04.640 06:37:18 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.640 06:37:18 -- dd/posix.sh@70 -- # (( atime_of == 1734158236 )) 00:07:04.640 06:37:18 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.640 [2024-12-14 06:37:18.573576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
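At this point the noatime half of the check has already passed: the access time recorded from dd.dump0 (atime_if=1734158236) is unchanged after the --iflag=noatime copy, so the (( atime_if == 1734158236 )) test above succeeds. The copy just launched repeats the read without noatime, and the later (( atime_if < 1734158238 )) assertion only holds if that plain read advances the file's atime. The pattern, reduced to the commands visible in this trace (short paths in place of the full spdk_repo ones):

    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1
    spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( $(stat --printf=%X dd.dump0) == atime_before ))   # noatime: atime untouched
    spdk_dd --aio --if=dd.dump0 --of=dd.dump1
    (( atime_before < $(stat --printf=%X dd.dump0) ))    # plain read: atime advanced

Whether the second assertion can pass depends on the filesystem's atime mount options; the test assumes a configuration where reads do update access times.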
00:07:04.640 [2024-12-14 06:37:18.573670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58479 ] 00:07:04.898 [2024-12-14 06:37:18.711578] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.898 [2024-12-14 06:37:18.780508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.898  [2024-12-14T06:37:19.148Z] Copying: 512/512 [B] (average 500 kBps) 00:07:05.156 00:07:05.156 06:37:19 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:05.156 06:37:19 -- dd/posix.sh@73 -- # (( atime_if < 1734158238 )) 00:07:05.156 00:07:05.156 real 0m2.056s 00:07:05.156 user 0m0.577s 00:07:05.156 sys 0m0.228s 00:07:05.156 06:37:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.156 06:37:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.156 ************************************ 00:07:05.156 END TEST dd_flag_noatime_forced_aio 00:07:05.156 ************************************ 00:07:05.156 06:37:19 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:05.156 06:37:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.156 06:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.156 06:37:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.156 ************************************ 00:07:05.156 START TEST dd_flags_misc_forced_aio 00:07:05.156 ************************************ 00:07:05.156 06:37:19 -- common/autotest_common.sh@1114 -- # io 00:07:05.156 06:37:19 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:05.156 06:37:19 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:05.156 06:37:19 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:05.156 06:37:19 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:05.156 06:37:19 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:05.156 06:37:19 -- dd/common.sh@98 -- # xtrace_disable 00:07:05.156 06:37:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.156 06:37:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:05.156 06:37:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:05.156 [2024-12-14 06:37:19.140500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
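The misc-flags test now underway crosses every read flag in flags_ro=(direct nonblock) with every write flag in flags_rw=(direct nonblock sync dsync), so the eight copies that follow are the full product; after each copy the long [[ ... == ... ]] check compares the data read back from dd.dump1 against the generated pattern. The loop structure implied by the variables traced above, as a sketch:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
      gen_bytes 512            # regenerate the 512-byte dd.dump0 for this read-flag pass
      for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
      done
    done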
00:07:05.156 [2024-12-14 06:37:19.140743] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58505 ] 00:07:05.415 [2024-12-14 06:37:19.278448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.415 [2024-12-14 06:37:19.327750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.415  [2024-12-14T06:37:19.665Z] Copying: 512/512 [B] (average 500 kBps) 00:07:05.673 00:07:05.673 06:37:19 -- dd/posix.sh@93 -- # [[ zjcoppdjqcql2vl36cx9t4gmmely1uoi6g27o6r4ctdxuwy6dabufkyaehhddnj00xoh6m5do32crauhsgwybilmxda05dfwpg7608ir37o5yq9qsq0mtw7qgi73sq2xl7qom6fdb23l3jugaj2j6xjnhc6mpiasyp2xr6zt6urt0fuzcqb58mzif5ffkjzlb1h2mdvobvvggxj5gvxall86tjmzgnv3wiswer0qyi5tkl2ek549ph67c1ztufz2ll45wc8rdskgd7p3qghnkb20tfctfla0t3xbgaeux77u9c000uxv1dypcmkscd3w4ep07sydqkyn3eqvoyl0pevgfrypq6itxoqloxbxub84ybuf1dpki5b4q2onr967b1d5dx0mgchnyldhhwvqgh6t2ddrevylijdzo83ter2bll41sgqdhr7egj148cg6hebqzfr991al7q3tt1vx6v7nlk56yi1mmob09qdgnuyqw4udypxrzepn4kdzxb5d == \z\j\c\o\p\p\d\j\q\c\q\l\2\v\l\3\6\c\x\9\t\4\g\m\m\e\l\y\1\u\o\i\6\g\2\7\o\6\r\4\c\t\d\x\u\w\y\6\d\a\b\u\f\k\y\a\e\h\h\d\d\n\j\0\0\x\o\h\6\m\5\d\o\3\2\c\r\a\u\h\s\g\w\y\b\i\l\m\x\d\a\0\5\d\f\w\p\g\7\6\0\8\i\r\3\7\o\5\y\q\9\q\s\q\0\m\t\w\7\q\g\i\7\3\s\q\2\x\l\7\q\o\m\6\f\d\b\2\3\l\3\j\u\g\a\j\2\j\6\x\j\n\h\c\6\m\p\i\a\s\y\p\2\x\r\6\z\t\6\u\r\t\0\f\u\z\c\q\b\5\8\m\z\i\f\5\f\f\k\j\z\l\b\1\h\2\m\d\v\o\b\v\v\g\g\x\j\5\g\v\x\a\l\l\8\6\t\j\m\z\g\n\v\3\w\i\s\w\e\r\0\q\y\i\5\t\k\l\2\e\k\5\4\9\p\h\6\7\c\1\z\t\u\f\z\2\l\l\4\5\w\c\8\r\d\s\k\g\d\7\p\3\q\g\h\n\k\b\2\0\t\f\c\t\f\l\a\0\t\3\x\b\g\a\e\u\x\7\7\u\9\c\0\0\0\u\x\v\1\d\y\p\c\m\k\s\c\d\3\w\4\e\p\0\7\s\y\d\q\k\y\n\3\e\q\v\o\y\l\0\p\e\v\g\f\r\y\p\q\6\i\t\x\o\q\l\o\x\b\x\u\b\8\4\y\b\u\f\1\d\p\k\i\5\b\4\q\2\o\n\r\9\6\7\b\1\d\5\d\x\0\m\g\c\h\n\y\l\d\h\h\w\v\q\g\h\6\t\2\d\d\r\e\v\y\l\i\j\d\z\o\8\3\t\e\r\2\b\l\l\4\1\s\g\q\d\h\r\7\e\g\j\1\4\8\c\g\6\h\e\b\q\z\f\r\9\9\1\a\l\7\q\3\t\t\1\v\x\6\v\7\n\l\k\5\6\y\i\1\m\m\o\b\0\9\q\d\g\n\u\y\q\w\4\u\d\y\p\x\r\z\e\p\n\4\k\d\z\x\b\5\d ]] 00:07:05.673 06:37:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:05.673 06:37:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:05.673 [2024-12-14 06:37:19.588449] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:05.673 [2024-12-14 06:37:19.588540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58513 ] 00:07:05.932 [2024-12-14 06:37:19.726975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.932 [2024-12-14 06:37:19.785306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.932  [2024-12-14T06:37:20.182Z] Copying: 512/512 [B] (average 500 kBps) 00:07:06.190 00:07:06.191 06:37:20 -- dd/posix.sh@93 -- # [[ zjcoppdjqcql2vl36cx9t4gmmely1uoi6g27o6r4ctdxuwy6dabufkyaehhddnj00xoh6m5do32crauhsgwybilmxda05dfwpg7608ir37o5yq9qsq0mtw7qgi73sq2xl7qom6fdb23l3jugaj2j6xjnhc6mpiasyp2xr6zt6urt0fuzcqb58mzif5ffkjzlb1h2mdvobvvggxj5gvxall86tjmzgnv3wiswer0qyi5tkl2ek549ph67c1ztufz2ll45wc8rdskgd7p3qghnkb20tfctfla0t3xbgaeux77u9c000uxv1dypcmkscd3w4ep07sydqkyn3eqvoyl0pevgfrypq6itxoqloxbxub84ybuf1dpki5b4q2onr967b1d5dx0mgchnyldhhwvqgh6t2ddrevylijdzo83ter2bll41sgqdhr7egj148cg6hebqzfr991al7q3tt1vx6v7nlk56yi1mmob09qdgnuyqw4udypxrzepn4kdzxb5d == \z\j\c\o\p\p\d\j\q\c\q\l\2\v\l\3\6\c\x\9\t\4\g\m\m\e\l\y\1\u\o\i\6\g\2\7\o\6\r\4\c\t\d\x\u\w\y\6\d\a\b\u\f\k\y\a\e\h\h\d\d\n\j\0\0\x\o\h\6\m\5\d\o\3\2\c\r\a\u\h\s\g\w\y\b\i\l\m\x\d\a\0\5\d\f\w\p\g\7\6\0\8\i\r\3\7\o\5\y\q\9\q\s\q\0\m\t\w\7\q\g\i\7\3\s\q\2\x\l\7\q\o\m\6\f\d\b\2\3\l\3\j\u\g\a\j\2\j\6\x\j\n\h\c\6\m\p\i\a\s\y\p\2\x\r\6\z\t\6\u\r\t\0\f\u\z\c\q\b\5\8\m\z\i\f\5\f\f\k\j\z\l\b\1\h\2\m\d\v\o\b\v\v\g\g\x\j\5\g\v\x\a\l\l\8\6\t\j\m\z\g\n\v\3\w\i\s\w\e\r\0\q\y\i\5\t\k\l\2\e\k\5\4\9\p\h\6\7\c\1\z\t\u\f\z\2\l\l\4\5\w\c\8\r\d\s\k\g\d\7\p\3\q\g\h\n\k\b\2\0\t\f\c\t\f\l\a\0\t\3\x\b\g\a\e\u\x\7\7\u\9\c\0\0\0\u\x\v\1\d\y\p\c\m\k\s\c\d\3\w\4\e\p\0\7\s\y\d\q\k\y\n\3\e\q\v\o\y\l\0\p\e\v\g\f\r\y\p\q\6\i\t\x\o\q\l\o\x\b\x\u\b\8\4\y\b\u\f\1\d\p\k\i\5\b\4\q\2\o\n\r\9\6\7\b\1\d\5\d\x\0\m\g\c\h\n\y\l\d\h\h\w\v\q\g\h\6\t\2\d\d\r\e\v\y\l\i\j\d\z\o\8\3\t\e\r\2\b\l\l\4\1\s\g\q\d\h\r\7\e\g\j\1\4\8\c\g\6\h\e\b\q\z\f\r\9\9\1\a\l\7\q\3\t\t\1\v\x\6\v\7\n\l\k\5\6\y\i\1\m\m\o\b\0\9\q\d\g\n\u\y\q\w\4\u\d\y\p\x\r\z\e\p\n\4\k\d\z\x\b\5\d ]] 00:07:06.191 06:37:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.191 06:37:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:06.191 [2024-12-14 06:37:20.092982] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:06.191 [2024-12-14 06:37:20.093102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58520 ] 00:07:06.449 [2024-12-14 06:37:20.231669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.449 [2024-12-14 06:37:20.287927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.449  [2024-12-14T06:37:20.699Z] Copying: 512/512 [B] (average 166 kBps) 00:07:06.707 00:07:06.707 06:37:20 -- dd/posix.sh@93 -- # [[ zjcoppdjqcql2vl36cx9t4gmmely1uoi6g27o6r4ctdxuwy6dabufkyaehhddnj00xoh6m5do32crauhsgwybilmxda05dfwpg7608ir37o5yq9qsq0mtw7qgi73sq2xl7qom6fdb23l3jugaj2j6xjnhc6mpiasyp2xr6zt6urt0fuzcqb58mzif5ffkjzlb1h2mdvobvvggxj5gvxall86tjmzgnv3wiswer0qyi5tkl2ek549ph67c1ztufz2ll45wc8rdskgd7p3qghnkb20tfctfla0t3xbgaeux77u9c000uxv1dypcmkscd3w4ep07sydqkyn3eqvoyl0pevgfrypq6itxoqloxbxub84ybuf1dpki5b4q2onr967b1d5dx0mgchnyldhhwvqgh6t2ddrevylijdzo83ter2bll41sgqdhr7egj148cg6hebqzfr991al7q3tt1vx6v7nlk56yi1mmob09qdgnuyqw4udypxrzepn4kdzxb5d == \z\j\c\o\p\p\d\j\q\c\q\l\2\v\l\3\6\c\x\9\t\4\g\m\m\e\l\y\1\u\o\i\6\g\2\7\o\6\r\4\c\t\d\x\u\w\y\6\d\a\b\u\f\k\y\a\e\h\h\d\d\n\j\0\0\x\o\h\6\m\5\d\o\3\2\c\r\a\u\h\s\g\w\y\b\i\l\m\x\d\a\0\5\d\f\w\p\g\7\6\0\8\i\r\3\7\o\5\y\q\9\q\s\q\0\m\t\w\7\q\g\i\7\3\s\q\2\x\l\7\q\o\m\6\f\d\b\2\3\l\3\j\u\g\a\j\2\j\6\x\j\n\h\c\6\m\p\i\a\s\y\p\2\x\r\6\z\t\6\u\r\t\0\f\u\z\c\q\b\5\8\m\z\i\f\5\f\f\k\j\z\l\b\1\h\2\m\d\v\o\b\v\v\g\g\x\j\5\g\v\x\a\l\l\8\6\t\j\m\z\g\n\v\3\w\i\s\w\e\r\0\q\y\i\5\t\k\l\2\e\k\5\4\9\p\h\6\7\c\1\z\t\u\f\z\2\l\l\4\5\w\c\8\r\d\s\k\g\d\7\p\3\q\g\h\n\k\b\2\0\t\f\c\t\f\l\a\0\t\3\x\b\g\a\e\u\x\7\7\u\9\c\0\0\0\u\x\v\1\d\y\p\c\m\k\s\c\d\3\w\4\e\p\0\7\s\y\d\q\k\y\n\3\e\q\v\o\y\l\0\p\e\v\g\f\r\y\p\q\6\i\t\x\o\q\l\o\x\b\x\u\b\8\4\y\b\u\f\1\d\p\k\i\5\b\4\q\2\o\n\r\9\6\7\b\1\d\5\d\x\0\m\g\c\h\n\y\l\d\h\h\w\v\q\g\h\6\t\2\d\d\r\e\v\y\l\i\j\d\z\o\8\3\t\e\r\2\b\l\l\4\1\s\g\q\d\h\r\7\e\g\j\1\4\8\c\g\6\h\e\b\q\z\f\r\9\9\1\a\l\7\q\3\t\t\1\v\x\6\v\7\n\l\k\5\6\y\i\1\m\m\o\b\0\9\q\d\g\n\u\y\q\w\4\u\d\y\p\x\r\z\e\p\n\4\k\d\z\x\b\5\d ]] 00:07:06.707 06:37:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.707 06:37:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:06.707 [2024-12-14 06:37:20.542876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:06.707 [2024-12-14 06:37:20.543025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58528 ] 00:07:06.707 [2024-12-14 06:37:20.673665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.966 [2024-12-14 06:37:20.724963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.966  [2024-12-14T06:37:20.958Z] Copying: 512/512 [B] (average 250 kBps) 00:07:06.966 00:07:06.966 06:37:20 -- dd/posix.sh@93 -- # [[ zjcoppdjqcql2vl36cx9t4gmmely1uoi6g27o6r4ctdxuwy6dabufkyaehhddnj00xoh6m5do32crauhsgwybilmxda05dfwpg7608ir37o5yq9qsq0mtw7qgi73sq2xl7qom6fdb23l3jugaj2j6xjnhc6mpiasyp2xr6zt6urt0fuzcqb58mzif5ffkjzlb1h2mdvobvvggxj5gvxall86tjmzgnv3wiswer0qyi5tkl2ek549ph67c1ztufz2ll45wc8rdskgd7p3qghnkb20tfctfla0t3xbgaeux77u9c000uxv1dypcmkscd3w4ep07sydqkyn3eqvoyl0pevgfrypq6itxoqloxbxub84ybuf1dpki5b4q2onr967b1d5dx0mgchnyldhhwvqgh6t2ddrevylijdzo83ter2bll41sgqdhr7egj148cg6hebqzfr991al7q3tt1vx6v7nlk56yi1mmob09qdgnuyqw4udypxrzepn4kdzxb5d == \z\j\c\o\p\p\d\j\q\c\q\l\2\v\l\3\6\c\x\9\t\4\g\m\m\e\l\y\1\u\o\i\6\g\2\7\o\6\r\4\c\t\d\x\u\w\y\6\d\a\b\u\f\k\y\a\e\h\h\d\d\n\j\0\0\x\o\h\6\m\5\d\o\3\2\c\r\a\u\h\s\g\w\y\b\i\l\m\x\d\a\0\5\d\f\w\p\g\7\6\0\8\i\r\3\7\o\5\y\q\9\q\s\q\0\m\t\w\7\q\g\i\7\3\s\q\2\x\l\7\q\o\m\6\f\d\b\2\3\l\3\j\u\g\a\j\2\j\6\x\j\n\h\c\6\m\p\i\a\s\y\p\2\x\r\6\z\t\6\u\r\t\0\f\u\z\c\q\b\5\8\m\z\i\f\5\f\f\k\j\z\l\b\1\h\2\m\d\v\o\b\v\v\g\g\x\j\5\g\v\x\a\l\l\8\6\t\j\m\z\g\n\v\3\w\i\s\w\e\r\0\q\y\i\5\t\k\l\2\e\k\5\4\9\p\h\6\7\c\1\z\t\u\f\z\2\l\l\4\5\w\c\8\r\d\s\k\g\d\7\p\3\q\g\h\n\k\b\2\0\t\f\c\t\f\l\a\0\t\3\x\b\g\a\e\u\x\7\7\u\9\c\0\0\0\u\x\v\1\d\y\p\c\m\k\s\c\d\3\w\4\e\p\0\7\s\y\d\q\k\y\n\3\e\q\v\o\y\l\0\p\e\v\g\f\r\y\p\q\6\i\t\x\o\q\l\o\x\b\x\u\b\8\4\y\b\u\f\1\d\p\k\i\5\b\4\q\2\o\n\r\9\6\7\b\1\d\5\d\x\0\m\g\c\h\n\y\l\d\h\h\w\v\q\g\h\6\t\2\d\d\r\e\v\y\l\i\j\d\z\o\8\3\t\e\r\2\b\l\l\4\1\s\g\q\d\h\r\7\e\g\j\1\4\8\c\g\6\h\e\b\q\z\f\r\9\9\1\a\l\7\q\3\t\t\1\v\x\6\v\7\n\l\k\5\6\y\i\1\m\m\o\b\0\9\q\d\g\n\u\y\q\w\4\u\d\y\p\x\r\z\e\p\n\4\k\d\z\x\b\5\d ]] 00:07:06.966 06:37:20 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:06.966 06:37:20 -- dd/posix.sh@86 -- # gen_bytes 512 00:07:06.966 06:37:20 -- dd/common.sh@98 -- # xtrace_disable 00:07:06.966 06:37:20 -- common/autotest_common.sh@10 -- # set +x 00:07:07.225 06:37:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:07.225 06:37:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:07.225 [2024-12-14 06:37:21.016807] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:07.225 [2024-12-14 06:37:21.016927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58530 ] 00:07:07.225 [2024-12-14 06:37:21.154423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.225 [2024-12-14 06:37:21.205032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.484  [2024-12-14T06:37:21.476Z] Copying: 512/512 [B] (average 500 kBps) 00:07:07.484 00:07:07.484 06:37:21 -- dd/posix.sh@93 -- # [[ 34ioyuaewomt730ip6zes4x8ufk2f5913gozhq9b34cb6v0synr5elohdolagrpyhvo8pndhfeqy3frskaw6lrd956cc2934l6rsvz7pcg2t5jtlub4yfevqg2eusvx8mmggh25mxpe41sx9s0acc3lws0gl1tjrudfnxyqz6tyjyd75sjk5ocb65ltbzgtq0r8di6gzfbphwsy8wvjacca1htmszftrzh66gvin8f0h0xatzy58ws8y8d1nclrs5xwu9ectz5u0hmpm8rcj25rx1bumecjjacu9xgvjj8xz7qv540jbkfj4nweym8s3d9j2zbg8rw0sy8q6e43qeqp1hultglkj47juif10ccu1qt31vkw7ou3gk07gmmod0ucvz2a0s228kepkxhaaobgeghhn6dl7uk4quzmfntxtqiv6h544zmza88hi0evnf0lem9p5bhptoxo1cqrhqb10xom8accg8eff06qifauv0d8szj2pzlg4zrunfvqu == \3\4\i\o\y\u\a\e\w\o\m\t\7\3\0\i\p\6\z\e\s\4\x\8\u\f\k\2\f\5\9\1\3\g\o\z\h\q\9\b\3\4\c\b\6\v\0\s\y\n\r\5\e\l\o\h\d\o\l\a\g\r\p\y\h\v\o\8\p\n\d\h\f\e\q\y\3\f\r\s\k\a\w\6\l\r\d\9\5\6\c\c\2\9\3\4\l\6\r\s\v\z\7\p\c\g\2\t\5\j\t\l\u\b\4\y\f\e\v\q\g\2\e\u\s\v\x\8\m\m\g\g\h\2\5\m\x\p\e\4\1\s\x\9\s\0\a\c\c\3\l\w\s\0\g\l\1\t\j\r\u\d\f\n\x\y\q\z\6\t\y\j\y\d\7\5\s\j\k\5\o\c\b\6\5\l\t\b\z\g\t\q\0\r\8\d\i\6\g\z\f\b\p\h\w\s\y\8\w\v\j\a\c\c\a\1\h\t\m\s\z\f\t\r\z\h\6\6\g\v\i\n\8\f\0\h\0\x\a\t\z\y\5\8\w\s\8\y\8\d\1\n\c\l\r\s\5\x\w\u\9\e\c\t\z\5\u\0\h\m\p\m\8\r\c\j\2\5\r\x\1\b\u\m\e\c\j\j\a\c\u\9\x\g\v\j\j\8\x\z\7\q\v\5\4\0\j\b\k\f\j\4\n\w\e\y\m\8\s\3\d\9\j\2\z\b\g\8\r\w\0\s\y\8\q\6\e\4\3\q\e\q\p\1\h\u\l\t\g\l\k\j\4\7\j\u\i\f\1\0\c\c\u\1\q\t\3\1\v\k\w\7\o\u\3\g\k\0\7\g\m\m\o\d\0\u\c\v\z\2\a\0\s\2\2\8\k\e\p\k\x\h\a\a\o\b\g\e\g\h\h\n\6\d\l\7\u\k\4\q\u\z\m\f\n\t\x\t\q\i\v\6\h\5\4\4\z\m\z\a\8\8\h\i\0\e\v\n\f\0\l\e\m\9\p\5\b\h\p\t\o\x\o\1\c\q\r\h\q\b\1\0\x\o\m\8\a\c\c\g\8\e\f\f\0\6\q\i\f\a\u\v\0\d\8\s\z\j\2\p\z\l\g\4\z\r\u\n\f\v\q\u ]] 00:07:07.484 06:37:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:07.484 06:37:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:07.743 [2024-12-14 06:37:21.482149] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:07.743 [2024-12-14 06:37:21.482263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58543 ] 00:07:07.743 [2024-12-14 06:37:21.617804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.743 [2024-12-14 06:37:21.664469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.743  [2024-12-14T06:37:21.993Z] Copying: 512/512 [B] (average 500 kBps) 00:07:08.001 00:07:08.001 06:37:21 -- dd/posix.sh@93 -- # [[ 34ioyuaewomt730ip6zes4x8ufk2f5913gozhq9b34cb6v0synr5elohdolagrpyhvo8pndhfeqy3frskaw6lrd956cc2934l6rsvz7pcg2t5jtlub4yfevqg2eusvx8mmggh25mxpe41sx9s0acc3lws0gl1tjrudfnxyqz6tyjyd75sjk5ocb65ltbzgtq0r8di6gzfbphwsy8wvjacca1htmszftrzh66gvin8f0h0xatzy58ws8y8d1nclrs5xwu9ectz5u0hmpm8rcj25rx1bumecjjacu9xgvjj8xz7qv540jbkfj4nweym8s3d9j2zbg8rw0sy8q6e43qeqp1hultglkj47juif10ccu1qt31vkw7ou3gk07gmmod0ucvz2a0s228kepkxhaaobgeghhn6dl7uk4quzmfntxtqiv6h544zmza88hi0evnf0lem9p5bhptoxo1cqrhqb10xom8accg8eff06qifauv0d8szj2pzlg4zrunfvqu == \3\4\i\o\y\u\a\e\w\o\m\t\7\3\0\i\p\6\z\e\s\4\x\8\u\f\k\2\f\5\9\1\3\g\o\z\h\q\9\b\3\4\c\b\6\v\0\s\y\n\r\5\e\l\o\h\d\o\l\a\g\r\p\y\h\v\o\8\p\n\d\h\f\e\q\y\3\f\r\s\k\a\w\6\l\r\d\9\5\6\c\c\2\9\3\4\l\6\r\s\v\z\7\p\c\g\2\t\5\j\t\l\u\b\4\y\f\e\v\q\g\2\e\u\s\v\x\8\m\m\g\g\h\2\5\m\x\p\e\4\1\s\x\9\s\0\a\c\c\3\l\w\s\0\g\l\1\t\j\r\u\d\f\n\x\y\q\z\6\t\y\j\y\d\7\5\s\j\k\5\o\c\b\6\5\l\t\b\z\g\t\q\0\r\8\d\i\6\g\z\f\b\p\h\w\s\y\8\w\v\j\a\c\c\a\1\h\t\m\s\z\f\t\r\z\h\6\6\g\v\i\n\8\f\0\h\0\x\a\t\z\y\5\8\w\s\8\y\8\d\1\n\c\l\r\s\5\x\w\u\9\e\c\t\z\5\u\0\h\m\p\m\8\r\c\j\2\5\r\x\1\b\u\m\e\c\j\j\a\c\u\9\x\g\v\j\j\8\x\z\7\q\v\5\4\0\j\b\k\f\j\4\n\w\e\y\m\8\s\3\d\9\j\2\z\b\g\8\r\w\0\s\y\8\q\6\e\4\3\q\e\q\p\1\h\u\l\t\g\l\k\j\4\7\j\u\i\f\1\0\c\c\u\1\q\t\3\1\v\k\w\7\o\u\3\g\k\0\7\g\m\m\o\d\0\u\c\v\z\2\a\0\s\2\2\8\k\e\p\k\x\h\a\a\o\b\g\e\g\h\h\n\6\d\l\7\u\k\4\q\u\z\m\f\n\t\x\t\q\i\v\6\h\5\4\4\z\m\z\a\8\8\h\i\0\e\v\n\f\0\l\e\m\9\p\5\b\h\p\t\o\x\o\1\c\q\r\h\q\b\1\0\x\o\m\8\a\c\c\g\8\e\f\f\0\6\q\i\f\a\u\v\0\d\8\s\z\j\2\p\z\l\g\4\z\r\u\n\f\v\q\u ]] 00:07:08.001 06:37:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:08.001 06:37:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:08.001 [2024-12-14 06:37:21.922450] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:08.001 [2024-12-14 06:37:21.922549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58545 ] 00:07:08.260 [2024-12-14 06:37:22.059617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.260 [2024-12-14 06:37:22.114377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.260  [2024-12-14T06:37:22.511Z] Copying: 512/512 [B] (average 250 kBps) 00:07:08.519 00:07:08.520 06:37:22 -- dd/posix.sh@93 -- # [[ 34ioyuaewomt730ip6zes4x8ufk2f5913gozhq9b34cb6v0synr5elohdolagrpyhvo8pndhfeqy3frskaw6lrd956cc2934l6rsvz7pcg2t5jtlub4yfevqg2eusvx8mmggh25mxpe41sx9s0acc3lws0gl1tjrudfnxyqz6tyjyd75sjk5ocb65ltbzgtq0r8di6gzfbphwsy8wvjacca1htmszftrzh66gvin8f0h0xatzy58ws8y8d1nclrs5xwu9ectz5u0hmpm8rcj25rx1bumecjjacu9xgvjj8xz7qv540jbkfj4nweym8s3d9j2zbg8rw0sy8q6e43qeqp1hultglkj47juif10ccu1qt31vkw7ou3gk07gmmod0ucvz2a0s228kepkxhaaobgeghhn6dl7uk4quzmfntxtqiv6h544zmza88hi0evnf0lem9p5bhptoxo1cqrhqb10xom8accg8eff06qifauv0d8szj2pzlg4zrunfvqu == \3\4\i\o\y\u\a\e\w\o\m\t\7\3\0\i\p\6\z\e\s\4\x\8\u\f\k\2\f\5\9\1\3\g\o\z\h\q\9\b\3\4\c\b\6\v\0\s\y\n\r\5\e\l\o\h\d\o\l\a\g\r\p\y\h\v\o\8\p\n\d\h\f\e\q\y\3\f\r\s\k\a\w\6\l\r\d\9\5\6\c\c\2\9\3\4\l\6\r\s\v\z\7\p\c\g\2\t\5\j\t\l\u\b\4\y\f\e\v\q\g\2\e\u\s\v\x\8\m\m\g\g\h\2\5\m\x\p\e\4\1\s\x\9\s\0\a\c\c\3\l\w\s\0\g\l\1\t\j\r\u\d\f\n\x\y\q\z\6\t\y\j\y\d\7\5\s\j\k\5\o\c\b\6\5\l\t\b\z\g\t\q\0\r\8\d\i\6\g\z\f\b\p\h\w\s\y\8\w\v\j\a\c\c\a\1\h\t\m\s\z\f\t\r\z\h\6\6\g\v\i\n\8\f\0\h\0\x\a\t\z\y\5\8\w\s\8\y\8\d\1\n\c\l\r\s\5\x\w\u\9\e\c\t\z\5\u\0\h\m\p\m\8\r\c\j\2\5\r\x\1\b\u\m\e\c\j\j\a\c\u\9\x\g\v\j\j\8\x\z\7\q\v\5\4\0\j\b\k\f\j\4\n\w\e\y\m\8\s\3\d\9\j\2\z\b\g\8\r\w\0\s\y\8\q\6\e\4\3\q\e\q\p\1\h\u\l\t\g\l\k\j\4\7\j\u\i\f\1\0\c\c\u\1\q\t\3\1\v\k\w\7\o\u\3\g\k\0\7\g\m\m\o\d\0\u\c\v\z\2\a\0\s\2\2\8\k\e\p\k\x\h\a\a\o\b\g\e\g\h\h\n\6\d\l\7\u\k\4\q\u\z\m\f\n\t\x\t\q\i\v\6\h\5\4\4\z\m\z\a\8\8\h\i\0\e\v\n\f\0\l\e\m\9\p\5\b\h\p\t\o\x\o\1\c\q\r\h\q\b\1\0\x\o\m\8\a\c\c\g\8\e\f\f\0\6\q\i\f\a\u\v\0\d\8\s\z\j\2\p\z\l\g\4\z\r\u\n\f\v\q\u ]] 00:07:08.520 06:37:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:08.520 06:37:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:08.520 [2024-12-14 06:37:22.396754] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:08.520 [2024-12-14 06:37:22.396848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58552 ] 00:07:08.779 [2024-12-14 06:37:22.532475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.779 [2024-12-14 06:37:22.594567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.779  [2024-12-14T06:37:23.029Z] Copying: 512/512 [B] (average 250 kBps) 00:07:09.037 00:07:09.038 06:37:22 -- dd/posix.sh@93 -- # [[ 34ioyuaewomt730ip6zes4x8ufk2f5913gozhq9b34cb6v0synr5elohdolagrpyhvo8pndhfeqy3frskaw6lrd956cc2934l6rsvz7pcg2t5jtlub4yfevqg2eusvx8mmggh25mxpe41sx9s0acc3lws0gl1tjrudfnxyqz6tyjyd75sjk5ocb65ltbzgtq0r8di6gzfbphwsy8wvjacca1htmszftrzh66gvin8f0h0xatzy58ws8y8d1nclrs5xwu9ectz5u0hmpm8rcj25rx1bumecjjacu9xgvjj8xz7qv540jbkfj4nweym8s3d9j2zbg8rw0sy8q6e43qeqp1hultglkj47juif10ccu1qt31vkw7ou3gk07gmmod0ucvz2a0s228kepkxhaaobgeghhn6dl7uk4quzmfntxtqiv6h544zmza88hi0evnf0lem9p5bhptoxo1cqrhqb10xom8accg8eff06qifauv0d8szj2pzlg4zrunfvqu == \3\4\i\o\y\u\a\e\w\o\m\t\7\3\0\i\p\6\z\e\s\4\x\8\u\f\k\2\f\5\9\1\3\g\o\z\h\q\9\b\3\4\c\b\6\v\0\s\y\n\r\5\e\l\o\h\d\o\l\a\g\r\p\y\h\v\o\8\p\n\d\h\f\e\q\y\3\f\r\s\k\a\w\6\l\r\d\9\5\6\c\c\2\9\3\4\l\6\r\s\v\z\7\p\c\g\2\t\5\j\t\l\u\b\4\y\f\e\v\q\g\2\e\u\s\v\x\8\m\m\g\g\h\2\5\m\x\p\e\4\1\s\x\9\s\0\a\c\c\3\l\w\s\0\g\l\1\t\j\r\u\d\f\n\x\y\q\z\6\t\y\j\y\d\7\5\s\j\k\5\o\c\b\6\5\l\t\b\z\g\t\q\0\r\8\d\i\6\g\z\f\b\p\h\w\s\y\8\w\v\j\a\c\c\a\1\h\t\m\s\z\f\t\r\z\h\6\6\g\v\i\n\8\f\0\h\0\x\a\t\z\y\5\8\w\s\8\y\8\d\1\n\c\l\r\s\5\x\w\u\9\e\c\t\z\5\u\0\h\m\p\m\8\r\c\j\2\5\r\x\1\b\u\m\e\c\j\j\a\c\u\9\x\g\v\j\j\8\x\z\7\q\v\5\4\0\j\b\k\f\j\4\n\w\e\y\m\8\s\3\d\9\j\2\z\b\g\8\r\w\0\s\y\8\q\6\e\4\3\q\e\q\p\1\h\u\l\t\g\l\k\j\4\7\j\u\i\f\1\0\c\c\u\1\q\t\3\1\v\k\w\7\o\u\3\g\k\0\7\g\m\m\o\d\0\u\c\v\z\2\a\0\s\2\2\8\k\e\p\k\x\h\a\a\o\b\g\e\g\h\h\n\6\d\l\7\u\k\4\q\u\z\m\f\n\t\x\t\q\i\v\6\h\5\4\4\z\m\z\a\8\8\h\i\0\e\v\n\f\0\l\e\m\9\p\5\b\h\p\t\o\x\o\1\c\q\r\h\q\b\1\0\x\o\m\8\a\c\c\g\8\e\f\f\0\6\q\i\f\a\u\v\0\d\8\s\z\j\2\p\z\l\g\4\z\r\u\n\f\v\q\u ]] 00:07:09.038 00:07:09.038 real 0m3.766s 00:07:09.038 user 0m2.065s 00:07:09.038 sys 0m0.731s 00:07:09.038 06:37:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.038 06:37:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.038 ************************************ 00:07:09.038 END TEST dd_flags_misc_forced_aio 00:07:09.038 ************************************ 00:07:09.038 06:37:22 -- dd/posix.sh@1 -- # cleanup 00:07:09.038 06:37:22 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:09.038 06:37:22 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:09.038 ************************************ 00:07:09.038 END TEST spdk_dd_posix 00:07:09.038 ************************************ 00:07:09.038 00:07:09.038 real 0m17.489s 00:07:09.038 user 0m8.296s 00:07:09.038 sys 0m3.375s 00:07:09.038 06:37:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.038 06:37:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.038 06:37:22 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:09.038 06:37:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.038 06:37:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 
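Every suite in this log is driven through the same run_test wrapper: dd.sh hands it a name plus a command (here spdk_dd_malloc and malloc.sh), and the wrapper prints the START TEST banner, times the command, and prints END TEST together with the real/user/sys lines seen after each section. A simplified sketch of that shape, inferred from the banners in this trace rather than from the real helper in autotest_common.sh:

    run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                 # the wrapped test script or function
      echo "END TEST $name"
    }

The real implementation also manages the xtrace_disable/xtrace_restore toggling and the asterisk banners visible throughout the log.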
00:07:09.038 06:37:22 -- common/autotest_common.sh@10 -- # set +x 00:07:09.038 ************************************ 00:07:09.038 START TEST spdk_dd_malloc 00:07:09.038 ************************************ 00:07:09.038 06:37:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:09.038 * Looking for test storage... 00:07:09.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:09.298 06:37:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:09.298 06:37:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:09.298 06:37:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:09.298 06:37:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:09.298 06:37:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:09.298 06:37:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:09.298 06:37:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:09.298 06:37:23 -- scripts/common.sh@335 -- # IFS=.-: 00:07:09.298 06:37:23 -- scripts/common.sh@335 -- # read -ra ver1 00:07:09.298 06:37:23 -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.298 06:37:23 -- scripts/common.sh@336 -- # read -ra ver2 00:07:09.298 06:37:23 -- scripts/common.sh@337 -- # local 'op=<' 00:07:09.298 06:37:23 -- scripts/common.sh@339 -- # ver1_l=2 00:07:09.298 06:37:23 -- scripts/common.sh@340 -- # ver2_l=1 00:07:09.298 06:37:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:09.298 06:37:23 -- scripts/common.sh@343 -- # case "$op" in 00:07:09.298 06:37:23 -- scripts/common.sh@344 -- # : 1 00:07:09.298 06:37:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:09.298 06:37:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.298 06:37:23 -- scripts/common.sh@364 -- # decimal 1 00:07:09.298 06:37:23 -- scripts/common.sh@352 -- # local d=1 00:07:09.298 06:37:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.298 06:37:23 -- scripts/common.sh@354 -- # echo 1 00:07:09.298 06:37:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:09.298 06:37:23 -- scripts/common.sh@365 -- # decimal 2 00:07:09.298 06:37:23 -- scripts/common.sh@352 -- # local d=2 00:07:09.298 06:37:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.298 06:37:23 -- scripts/common.sh@354 -- # echo 2 00:07:09.298 06:37:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:09.298 06:37:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:09.298 06:37:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:09.298 06:37:23 -- scripts/common.sh@367 -- # return 0 00:07:09.298 06:37:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.298 06:37:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:09.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.298 --rc genhtml_branch_coverage=1 00:07:09.298 --rc genhtml_function_coverage=1 00:07:09.298 --rc genhtml_legend=1 00:07:09.298 --rc geninfo_all_blocks=1 00:07:09.298 --rc geninfo_unexecuted_blocks=1 00:07:09.298 00:07:09.298 ' 00:07:09.298 06:37:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:09.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.298 --rc genhtml_branch_coverage=1 00:07:09.298 --rc genhtml_function_coverage=1 00:07:09.298 --rc genhtml_legend=1 00:07:09.298 --rc geninfo_all_blocks=1 00:07:09.298 --rc geninfo_unexecuted_blocks=1 00:07:09.298 00:07:09.298 ' 00:07:09.298 06:37:23 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:07:09.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.298 --rc genhtml_branch_coverage=1 00:07:09.298 --rc genhtml_function_coverage=1 00:07:09.298 --rc genhtml_legend=1 00:07:09.298 --rc geninfo_all_blocks=1 00:07:09.298 --rc geninfo_unexecuted_blocks=1 00:07:09.298 00:07:09.298 ' 00:07:09.298 06:37:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:09.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.298 --rc genhtml_branch_coverage=1 00:07:09.298 --rc genhtml_function_coverage=1 00:07:09.298 --rc genhtml_legend=1 00:07:09.298 --rc geninfo_all_blocks=1 00:07:09.298 --rc geninfo_unexecuted_blocks=1 00:07:09.298 00:07:09.298 ' 00:07:09.298 06:37:23 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:09.298 06:37:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.298 06:37:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.298 06:37:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.298 06:37:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.298 06:37:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.298 06:37:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.298 06:37:23 -- paths/export.sh@5 -- # export PATH 00:07:09.298 06:37:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.298 06:37:23 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:09.298 06:37:23 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.298 06:37:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.298 06:37:23 -- common/autotest_common.sh@10 -- # set +x 00:07:09.298 ************************************ 00:07:09.298 START TEST dd_malloc_copy 00:07:09.298 ************************************ 00:07:09.298 06:37:23 -- common/autotest_common.sh@1114 -- # malloc_copy 00:07:09.298 06:37:23 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:09.298 06:37:23 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:09.298 06:37:23 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:09.298 06:37:23 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:09.298 06:37:23 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:09.298 06:37:23 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:09.298 06:37:23 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:09.298 06:37:23 -- dd/malloc.sh@28 -- # gen_conf 00:07:09.298 06:37:23 -- dd/common.sh@31 -- # xtrace_disable 00:07:09.298 06:37:23 -- common/autotest_common.sh@10 -- # set +x 00:07:09.298 [2024-12-14 06:37:23.198749] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.298 [2024-12-14 06:37:23.198854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58633 ] 00:07:09.298 { 00:07:09.298 "subsystems": [ 00:07:09.298 { 00:07:09.298 "subsystem": "bdev", 00:07:09.298 "config": [ 00:07:09.298 { 00:07:09.298 "params": { 00:07:09.298 "block_size": 512, 00:07:09.298 "num_blocks": 1048576, 00:07:09.298 "name": "malloc0" 00:07:09.298 }, 00:07:09.298 "method": "bdev_malloc_create" 00:07:09.298 }, 00:07:09.298 { 00:07:09.298 "params": { 00:07:09.298 "block_size": 512, 00:07:09.298 "num_blocks": 1048576, 00:07:09.298 "name": "malloc1" 00:07:09.298 }, 00:07:09.298 "method": "bdev_malloc_create" 00:07:09.298 }, 00:07:09.298 { 00:07:09.298 "method": "bdev_wait_for_examine" 00:07:09.298 } 00:07:09.298 ] 00:07:09.298 } 00:07:09.298 ] 00:07:09.298 } 00:07:09.558 [2024-12-14 06:37:23.328694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.558 [2024-12-14 06:37:23.378555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.992  [2024-12-14T06:37:25.920Z] Copying: 237/512 [MB] (237 MBps) [2024-12-14T06:37:25.920Z] Copying: 442/512 [MB] (205 MBps) [2024-12-14T06:37:26.488Z] Copying: 512/512 [MB] (average 224 MBps) 00:07:12.496 00:07:12.496 06:37:26 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:12.496 06:37:26 -- dd/malloc.sh@33 -- # gen_conf 00:07:12.496 06:37:26 -- dd/common.sh@31 -- # xtrace_disable 00:07:12.496 06:37:26 -- common/autotest_common.sh@10 -- # set +x 00:07:12.496 [2024-12-14 06:37:26.299488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
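The first leg of dd_malloc_copy has just finished: two RAM-backed bdevs are created from the JSON config above (malloc0 and malloc1, 1048576 blocks of 512 bytes, i.e. 512 MiB each) and malloc0 is copied into malloc1 at roughly 224 MBps; the invocation just launched copies the data back in the other direction with the same config. Reduced to a sketch, with the config in a file instead of the /dev/fd/62 pipe the test uses:

    spdk_dd --ib=malloc0 --ob=malloc1 --json malloc_bdevs.json   # forward copy
    spdk_dd --ib=malloc1 --ob=malloc0 --json malloc_bdevs.json   # reverse copy

where malloc_bdevs.json is a hypothetical file holding exactly the two bdev_malloc_create entries and the bdev_wait_for_examine call printed above.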
00:07:12.496 [2024-12-14 06:37:26.299597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58670 ] 00:07:12.496 { 00:07:12.496 "subsystems": [ 00:07:12.496 { 00:07:12.496 "subsystem": "bdev", 00:07:12.496 "config": [ 00:07:12.496 { 00:07:12.496 "params": { 00:07:12.496 "block_size": 512, 00:07:12.496 "num_blocks": 1048576, 00:07:12.496 "name": "malloc0" 00:07:12.496 }, 00:07:12.496 "method": "bdev_malloc_create" 00:07:12.496 }, 00:07:12.496 { 00:07:12.496 "params": { 00:07:12.496 "block_size": 512, 00:07:12.496 "num_blocks": 1048576, 00:07:12.496 "name": "malloc1" 00:07:12.496 }, 00:07:12.496 "method": "bdev_malloc_create" 00:07:12.496 }, 00:07:12.496 { 00:07:12.496 "method": "bdev_wait_for_examine" 00:07:12.496 } 00:07:12.496 ] 00:07:12.496 } 00:07:12.496 ] 00:07:12.496 } 00:07:12.496 [2024-12-14 06:37:26.438137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.754 [2024-12-14 06:37:26.491846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.132  [2024-12-14T06:37:29.061Z] Copying: 233/512 [MB] (233 MBps) [2024-12-14T06:37:29.061Z] Copying: 443/512 [MB] (210 MBps) [2024-12-14T06:37:29.629Z] Copying: 512/512 [MB] (average 224 MBps) 00:07:15.637 00:07:15.637 00:07:15.637 real 0m6.194s 00:07:15.637 user 0m5.570s 00:07:15.637 sys 0m0.472s 00:07:15.637 06:37:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.637 06:37:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.637 ************************************ 00:07:15.637 END TEST dd_malloc_copy 00:07:15.637 ************************************ 00:07:15.637 00:07:15.637 real 0m6.436s 00:07:15.637 user 0m5.708s 00:07:15.637 sys 0m0.572s 00:07:15.637 06:37:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.637 06:37:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.637 ************************************ 00:07:15.637 END TEST spdk_dd_malloc 00:07:15.637 ************************************ 00:07:15.637 06:37:29 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:15.637 06:37:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:15.637 06:37:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.637 06:37:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.637 ************************************ 00:07:15.637 START TEST spdk_dd_bdev_to_bdev 00:07:15.637 ************************************ 00:07:15.637 06:37:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:07:15.637 * Looking for test storage... 
00:07:15.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:15.637 06:37:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:15.637 06:37:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:15.638 06:37:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:15.638 06:37:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:15.638 06:37:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:15.638 06:37:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:15.638 06:37:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:15.638 06:37:29 -- scripts/common.sh@335 -- # IFS=.-: 00:07:15.638 06:37:29 -- scripts/common.sh@335 -- # read -ra ver1 00:07:15.638 06:37:29 -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.638 06:37:29 -- scripts/common.sh@336 -- # read -ra ver2 00:07:15.638 06:37:29 -- scripts/common.sh@337 -- # local 'op=<' 00:07:15.638 06:37:29 -- scripts/common.sh@339 -- # ver1_l=2 00:07:15.638 06:37:29 -- scripts/common.sh@340 -- # ver2_l=1 00:07:15.638 06:37:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:15.638 06:37:29 -- scripts/common.sh@343 -- # case "$op" in 00:07:15.638 06:37:29 -- scripts/common.sh@344 -- # : 1 00:07:15.638 06:37:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:15.638 06:37:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.638 06:37:29 -- scripts/common.sh@364 -- # decimal 1 00:07:15.638 06:37:29 -- scripts/common.sh@352 -- # local d=1 00:07:15.638 06:37:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.638 06:37:29 -- scripts/common.sh@354 -- # echo 1 00:07:15.638 06:37:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:15.638 06:37:29 -- scripts/common.sh@365 -- # decimal 2 00:07:15.638 06:37:29 -- scripts/common.sh@352 -- # local d=2 00:07:15.638 06:37:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.638 06:37:29 -- scripts/common.sh@354 -- # echo 2 00:07:15.638 06:37:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:15.638 06:37:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:15.638 06:37:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:15.638 06:37:29 -- scripts/common.sh@367 -- # return 0 00:07:15.638 06:37:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.638 06:37:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:15.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.638 --rc genhtml_branch_coverage=1 00:07:15.638 --rc genhtml_function_coverage=1 00:07:15.638 --rc genhtml_legend=1 00:07:15.638 --rc geninfo_all_blocks=1 00:07:15.638 --rc geninfo_unexecuted_blocks=1 00:07:15.638 00:07:15.638 ' 00:07:15.638 06:37:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:15.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.638 --rc genhtml_branch_coverage=1 00:07:15.638 --rc genhtml_function_coverage=1 00:07:15.638 --rc genhtml_legend=1 00:07:15.638 --rc geninfo_all_blocks=1 00:07:15.638 --rc geninfo_unexecuted_blocks=1 00:07:15.638 00:07:15.638 ' 00:07:15.638 06:37:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:15.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.638 --rc genhtml_branch_coverage=1 00:07:15.638 --rc genhtml_function_coverage=1 00:07:15.638 --rc genhtml_legend=1 00:07:15.638 --rc geninfo_all_blocks=1 00:07:15.638 --rc geninfo_unexecuted_blocks=1 00:07:15.638 00:07:15.638 ' 00:07:15.638 06:37:29 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:15.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.638 --rc genhtml_branch_coverage=1 00:07:15.638 --rc genhtml_function_coverage=1 00:07:15.638 --rc genhtml_legend=1 00:07:15.638 --rc geninfo_all_blocks=1 00:07:15.638 --rc geninfo_unexecuted_blocks=1 00:07:15.638 00:07:15.638 ' 00:07:15.638 06:37:29 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.638 06:37:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.638 06:37:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.638 06:37:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.638 06:37:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.638 06:37:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.638 06:37:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.638 06:37:29 -- paths/export.sh@5 -- # export PATH 00:07:15.638 06:37:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:15.638 06:37:29 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:15.898 06:37:29 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:15.898 06:37:29 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:15.898 06:37:29 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:15.898 06:37:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:15.898 06:37:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.898 06:37:29 -- common/autotest_common.sh@10 -- # set +x 00:07:15.898 ************************************ 00:07:15.898 START TEST dd_inflate_file 00:07:15.898 ************************************ 00:07:15.898 06:37:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:15.898 [2024-12-14 06:37:29.686256] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
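dd_inflate_file grows the test file before anything is pushed through the NVMe bdevs: the run just launched appends 64 MiB of zeros to dd.dump0 (--if=/dev/zero --oflag=append --bs=1048576 --count=64). Together with the 26-character magic line echoed into the file just above (plus its newline; the redirection itself is outside the visible trace), that is what the 67108891-byte size check a little further down expects: 64 * 1048576 + 27 = 67108891. A stand-alone sketch of the same growth step, assuming dd.dump0 starts out holding only the magic line:

    printf 'This Is Our Magic, find it\n' > dd.dump0
    spdk_dd --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64
    wc -c < dd.dump0        # expect 67108891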
00:07:15.898 [2024-12-14 06:37:29.686387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58788 ] 00:07:15.898 [2024-12-14 06:37:29.825001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.157 [2024-12-14 06:37:29.893235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.157  [2024-12-14T06:37:30.408Z] Copying: 64/64 [MB] (average 1684 MBps) 00:07:16.416 00:07:16.416 00:07:16.416 real 0m0.553s 00:07:16.416 user 0m0.292s 00:07:16.416 sys 0m0.143s 00:07:16.416 06:37:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.416 ************************************ 00:07:16.416 END TEST dd_inflate_file 00:07:16.416 ************************************ 00:07:16.416 06:37:30 -- common/autotest_common.sh@10 -- # set +x 00:07:16.416 06:37:30 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:16.416 06:37:30 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:16.416 06:37:30 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:16.416 06:37:30 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:16.416 06:37:30 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:16.416 06:37:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.416 06:37:30 -- common/autotest_common.sh@10 -- # set +x 00:07:16.416 06:37:30 -- dd/common.sh@31 -- # xtrace_disable 00:07:16.416 06:37:30 -- common/autotest_common.sh@10 -- # set +x 00:07:16.416 ************************************ 00:07:16.416 START TEST dd_copy_to_out_bdev 00:07:16.416 ************************************ 00:07:16.416 06:37:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:16.416 { 00:07:16.416 "subsystems": [ 00:07:16.416 { 00:07:16.416 "subsystem": "bdev", 00:07:16.416 "config": [ 00:07:16.416 { 00:07:16.416 "params": { 00:07:16.416 "trtype": "pcie", 00:07:16.416 "traddr": "0000:00:06.0", 00:07:16.416 "name": "Nvme0" 00:07:16.416 }, 00:07:16.416 "method": "bdev_nvme_attach_controller" 00:07:16.416 }, 00:07:16.416 { 00:07:16.416 "params": { 00:07:16.416 "trtype": "pcie", 00:07:16.416 "traddr": "0000:00:07.0", 00:07:16.416 "name": "Nvme1" 00:07:16.416 }, 00:07:16.416 "method": "bdev_nvme_attach_controller" 00:07:16.416 }, 00:07:16.416 { 00:07:16.416 "method": "bdev_wait_for_examine" 00:07:16.416 } 00:07:16.416 ] 00:07:16.416 } 00:07:16.416 ] 00:07:16.416 } 00:07:16.416 [2024-12-14 06:37:30.300185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
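dd_copy_to_out_bdev, just launched, pushes that ~64 MiB file into the first NVMe namespace: the input is the inflated dd.dump0 and the output is the Nvme0n1 bdev, with both controllers attached over pcie through the JSON config printed above (Nvme0 at 0000:00:06.0, Nvme1 at 0000:00:07.0). The same setup as a file-based sketch, assuming NVMe devices really sit at those addresses (the test feeds the config through /dev/fd/62 instead of a file):

    cat > nvme_bdevs.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller" },
            { "params": { "trtype": "pcie", "traddr": "0000:00:07.0", "name": "Nvme1" },
              "method": "bdev_nvme_attach_controller" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --json nvme_bdevs.json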
00:07:16.416 [2024-12-14 06:37:30.300284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58816 ] 00:07:16.675 [2024-12-14 06:37:30.440152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.675 [2024-12-14 06:37:30.492139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.054  [2024-12-14T06:37:32.046Z] Copying: 49/64 [MB] (49 MBps) [2024-12-14T06:37:32.305Z] Copying: 64/64 [MB] (average 49 MBps) 00:07:18.313 00:07:18.313 00:07:18.313 real 0m1.972s 00:07:18.313 user 0m1.740s 00:07:18.313 sys 0m0.164s 00:07:18.313 06:37:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.313 ************************************ 00:07:18.313 END TEST dd_copy_to_out_bdev 00:07:18.313 ************************************ 00:07:18.313 06:37:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.313 06:37:32 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:18.313 06:37:32 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:18.313 06:37:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:18.313 06:37:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.313 06:37:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.313 ************************************ 00:07:18.313 START TEST dd_offset_magic 00:07:18.313 ************************************ 00:07:18.313 06:37:32 -- common/autotest_common.sh@1114 -- # offset_magic 00:07:18.313 06:37:32 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:18.313 06:37:32 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:18.313 06:37:32 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:18.313 06:37:32 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:18.313 06:37:32 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:18.313 06:37:32 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:18.313 06:37:32 -- dd/common.sh@31 -- # xtrace_disable 00:07:18.313 06:37:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.572 [2024-12-14 06:37:32.323749] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:18.573 [2024-12-14 06:37:32.323846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58860 ] 00:07:18.573 { 00:07:18.573 "subsystems": [ 00:07:18.573 { 00:07:18.573 "subsystem": "bdev", 00:07:18.573 "config": [ 00:07:18.573 { 00:07:18.573 "params": { 00:07:18.573 "trtype": "pcie", 00:07:18.573 "traddr": "0000:00:06.0", 00:07:18.573 "name": "Nvme0" 00:07:18.573 }, 00:07:18.573 "method": "bdev_nvme_attach_controller" 00:07:18.573 }, 00:07:18.573 { 00:07:18.573 "params": { 00:07:18.573 "trtype": "pcie", 00:07:18.573 "traddr": "0000:00:07.0", 00:07:18.573 "name": "Nvme1" 00:07:18.573 }, 00:07:18.573 "method": "bdev_nvme_attach_controller" 00:07:18.573 }, 00:07:18.573 { 00:07:18.573 "method": "bdev_wait_for_examine" 00:07:18.573 } 00:07:18.573 ] 00:07:18.573 } 00:07:18.573 ] 00:07:18.573 } 00:07:18.573 [2024-12-14 06:37:32.463798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.573 [2024-12-14 06:37:32.526747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.832  [2024-12-14T06:37:33.082Z] Copying: 65/65 [MB] (average 970 MBps) 00:07:19.090 00:07:19.090 06:37:33 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:19.090 06:37:33 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:19.090 06:37:33 -- dd/common.sh@31 -- # xtrace_disable 00:07:19.090 06:37:33 -- common/autotest_common.sh@10 -- # set +x 00:07:19.090 [2024-12-14 06:37:33.074584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:19.090 [2024-12-14 06:37:33.075150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58880 ] 00:07:19.350 { 00:07:19.350 "subsystems": [ 00:07:19.350 { 00:07:19.350 "subsystem": "bdev", 00:07:19.350 "config": [ 00:07:19.350 { 00:07:19.350 "params": { 00:07:19.350 "trtype": "pcie", 00:07:19.350 "traddr": "0000:00:06.0", 00:07:19.350 "name": "Nvme0" 00:07:19.350 }, 00:07:19.350 "method": "bdev_nvme_attach_controller" 00:07:19.350 }, 00:07:19.350 { 00:07:19.350 "params": { 00:07:19.350 "trtype": "pcie", 00:07:19.350 "traddr": "0000:00:07.0", 00:07:19.350 "name": "Nvme1" 00:07:19.350 }, 00:07:19.350 "method": "bdev_nvme_attach_controller" 00:07:19.350 }, 00:07:19.350 { 00:07:19.350 "method": "bdev_wait_for_examine" 00:07:19.350 } 00:07:19.350 ] 00:07:19.350 } 00:07:19.350 ] 00:07:19.350 } 00:07:19.350 [2024-12-14 06:37:33.211663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.350 [2024-12-14 06:37:33.280107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.609  [2024-12-14T06:37:33.860Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:19.868 00:07:19.868 06:37:33 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:19.868 06:37:33 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:19.868 06:37:33 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:19.868 06:37:33 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:19.868 06:37:33 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:19.868 06:37:33 -- dd/common.sh@31 -- # xtrace_disable 00:07:19.868 06:37:33 -- common/autotest_common.sh@10 -- # set +x 00:07:19.868 [2024-12-14 06:37:33.733002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:19.868 [2024-12-14 06:37:33.733091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58895 ] 00:07:19.868 { 00:07:19.868 "subsystems": [ 00:07:19.868 { 00:07:19.868 "subsystem": "bdev", 00:07:19.868 "config": [ 00:07:19.868 { 00:07:19.868 "params": { 00:07:19.868 "trtype": "pcie", 00:07:19.868 "traddr": "0000:00:06.0", 00:07:19.868 "name": "Nvme0" 00:07:19.868 }, 00:07:19.868 "method": "bdev_nvme_attach_controller" 00:07:19.868 }, 00:07:19.868 { 00:07:19.868 "params": { 00:07:19.868 "trtype": "pcie", 00:07:19.868 "traddr": "0000:00:07.0", 00:07:19.868 "name": "Nvme1" 00:07:19.868 }, 00:07:19.868 "method": "bdev_nvme_attach_controller" 00:07:19.868 }, 00:07:19.868 { 00:07:19.868 "method": "bdev_wait_for_examine" 00:07:19.868 } 00:07:19.868 ] 00:07:19.868 } 00:07:19.868 ] 00:07:19.868 } 00:07:20.127 [2024-12-14 06:37:33.871362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.127 [2024-12-14 06:37:33.939860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.386  [2024-12-14T06:37:34.637Z] Copying: 65/65 [MB] (average 1065 MBps) 00:07:20.645 00:07:20.645 06:37:34 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:20.645 06:37:34 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:20.645 06:37:34 -- dd/common.sh@31 -- # xtrace_disable 00:07:20.645 06:37:34 -- common/autotest_common.sh@10 -- # set +x 00:07:20.645 [2024-12-14 06:37:34.481917] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:20.645 [2024-12-14 06:37:34.482009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58909 ] 00:07:20.645 { 00:07:20.645 "subsystems": [ 00:07:20.645 { 00:07:20.645 "subsystem": "bdev", 00:07:20.645 "config": [ 00:07:20.645 { 00:07:20.645 "params": { 00:07:20.645 "trtype": "pcie", 00:07:20.645 "traddr": "0000:00:06.0", 00:07:20.646 "name": "Nvme0" 00:07:20.646 }, 00:07:20.646 "method": "bdev_nvme_attach_controller" 00:07:20.646 }, 00:07:20.646 { 00:07:20.646 "params": { 00:07:20.646 "trtype": "pcie", 00:07:20.646 "traddr": "0000:00:07.0", 00:07:20.646 "name": "Nvme1" 00:07:20.646 }, 00:07:20.646 "method": "bdev_nvme_attach_controller" 00:07:20.646 }, 00:07:20.646 { 00:07:20.646 "method": "bdev_wait_for_examine" 00:07:20.646 } 00:07:20.646 ] 00:07:20.646 } 00:07:20.646 ] 00:07:20.646 } 00:07:20.646 [2024-12-14 06:37:34.619738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.905 [2024-12-14 06:37:34.694442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.905  [2024-12-14T06:37:35.156Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:21.164 00:07:21.164 06:37:35 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:21.164 06:37:35 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:21.164 00:07:21.164 real 0m2.822s 00:07:21.164 user 0m2.085s 00:07:21.164 sys 0m0.548s 00:07:21.164 06:37:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.164 06:37:35 -- common/autotest_common.sh@10 -- # set +x 00:07:21.164 ************************************ 00:07:21.164 END TEST dd_offset_magic 00:07:21.164 ************************************ 00:07:21.164 06:37:35 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:21.164 06:37:35 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:21.164 06:37:35 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:21.164 06:37:35 -- dd/common.sh@11 -- # local nvme_ref= 00:07:21.164 06:37:35 -- dd/common.sh@12 -- # local size=4194330 00:07:21.164 06:37:35 -- dd/common.sh@14 -- # local bs=1048576 00:07:21.164 06:37:35 -- dd/common.sh@15 -- # local count=5 00:07:21.164 06:37:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:21.164 06:37:35 -- dd/common.sh@18 -- # gen_conf 00:07:21.164 06:37:35 -- dd/common.sh@31 -- # xtrace_disable 00:07:21.164 06:37:35 -- common/autotest_common.sh@10 -- # set +x 00:07:21.424 [2024-12-14 06:37:35.180423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:21.424 [2024-12-14 06:37:35.180513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58944 ] 00:07:21.424 { 00:07:21.424 "subsystems": [ 00:07:21.424 { 00:07:21.424 "subsystem": "bdev", 00:07:21.424 "config": [ 00:07:21.424 { 00:07:21.424 "params": { 00:07:21.424 "trtype": "pcie", 00:07:21.424 "traddr": "0000:00:06.0", 00:07:21.424 "name": "Nvme0" 00:07:21.424 }, 00:07:21.424 "method": "bdev_nvme_attach_controller" 00:07:21.424 }, 00:07:21.424 { 00:07:21.424 "params": { 00:07:21.424 "trtype": "pcie", 00:07:21.424 "traddr": "0000:00:07.0", 00:07:21.424 "name": "Nvme1" 00:07:21.424 }, 00:07:21.424 "method": "bdev_nvme_attach_controller" 00:07:21.424 }, 00:07:21.424 { 00:07:21.424 "method": "bdev_wait_for_examine" 00:07:21.424 } 00:07:21.424 ] 00:07:21.424 } 00:07:21.424 ] 00:07:21.424 } 00:07:21.424 [2024-12-14 06:37:35.319421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.424 [2024-12-14 06:37:35.387861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.683  [2024-12-14T06:37:35.955Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:07:21.963 00:07:21.963 06:37:35 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:21.963 06:37:35 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:21.963 06:37:35 -- dd/common.sh@11 -- # local nvme_ref= 00:07:21.963 06:37:35 -- dd/common.sh@12 -- # local size=4194330 00:07:21.963 06:37:35 -- dd/common.sh@14 -- # local bs=1048576 00:07:21.963 06:37:35 -- dd/common.sh@15 -- # local count=5 00:07:21.963 06:37:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:21.963 06:37:35 -- dd/common.sh@18 -- # gen_conf 00:07:21.963 06:37:35 -- dd/common.sh@31 -- # xtrace_disable 00:07:21.963 06:37:35 -- common/autotest_common.sh@10 -- # set +x 00:07:21.963 [2024-12-14 06:37:35.834510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:21.963 [2024-12-14 06:37:35.834613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58959 ] 00:07:21.963 { 00:07:21.963 "subsystems": [ 00:07:21.963 { 00:07:21.963 "subsystem": "bdev", 00:07:21.963 "config": [ 00:07:21.963 { 00:07:21.963 "params": { 00:07:21.963 "trtype": "pcie", 00:07:21.963 "traddr": "0000:00:06.0", 00:07:21.963 "name": "Nvme0" 00:07:21.963 }, 00:07:21.963 "method": "bdev_nvme_attach_controller" 00:07:21.963 }, 00:07:21.963 { 00:07:21.963 "params": { 00:07:21.963 "trtype": "pcie", 00:07:21.963 "traddr": "0000:00:07.0", 00:07:21.963 "name": "Nvme1" 00:07:21.963 }, 00:07:21.963 "method": "bdev_nvme_attach_controller" 00:07:21.963 }, 00:07:21.963 { 00:07:21.963 "method": "bdev_wait_for_examine" 00:07:21.963 } 00:07:21.963 ] 00:07:21.963 } 00:07:21.963 ] 00:07:21.963 } 00:07:22.229 [2024-12-14 06:37:35.974478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.229 [2024-12-14 06:37:36.037150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.488  [2024-12-14T06:37:36.480Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:07:22.488 00:07:22.488 06:37:36 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:22.488 00:07:22.488 real 0m7.009s 00:07:22.488 user 0m5.199s 00:07:22.488 sys 0m1.327s 00:07:22.488 ************************************ 00:07:22.488 END TEST spdk_dd_bdev_to_bdev 00:07:22.488 ************************************ 00:07:22.488 06:37:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.488 06:37:36 -- common/autotest_common.sh@10 -- # set +x 00:07:22.748 06:37:36 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:22.748 06:37:36 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:22.748 06:37:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:22.748 06:37:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.748 06:37:36 -- common/autotest_common.sh@10 -- # set +x 00:07:22.748 ************************************ 00:07:22.748 START TEST spdk_dd_uring 00:07:22.748 ************************************ 00:07:22.748 06:37:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:22.748 * Looking for test storage... 
00:07:22.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:22.748 06:37:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:22.748 06:37:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:22.748 06:37:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:22.748 06:37:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:22.748 06:37:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:22.748 06:37:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:22.748 06:37:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:22.748 06:37:36 -- scripts/common.sh@335 -- # IFS=.-: 00:07:22.748 06:37:36 -- scripts/common.sh@335 -- # read -ra ver1 00:07:22.748 06:37:36 -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.748 06:37:36 -- scripts/common.sh@336 -- # read -ra ver2 00:07:22.748 06:37:36 -- scripts/common.sh@337 -- # local 'op=<' 00:07:22.748 06:37:36 -- scripts/common.sh@339 -- # ver1_l=2 00:07:22.748 06:37:36 -- scripts/common.sh@340 -- # ver2_l=1 00:07:22.748 06:37:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:22.748 06:37:36 -- scripts/common.sh@343 -- # case "$op" in 00:07:22.748 06:37:36 -- scripts/common.sh@344 -- # : 1 00:07:22.748 06:37:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:22.748 06:37:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.748 06:37:36 -- scripts/common.sh@364 -- # decimal 1 00:07:22.748 06:37:36 -- scripts/common.sh@352 -- # local d=1 00:07:22.748 06:37:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.748 06:37:36 -- scripts/common.sh@354 -- # echo 1 00:07:22.748 06:37:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:22.748 06:37:36 -- scripts/common.sh@365 -- # decimal 2 00:07:22.748 06:37:36 -- scripts/common.sh@352 -- # local d=2 00:07:22.748 06:37:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.748 06:37:36 -- scripts/common.sh@354 -- # echo 2 00:07:22.748 06:37:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:22.748 06:37:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:22.748 06:37:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:22.748 06:37:36 -- scripts/common.sh@367 -- # return 0 00:07:22.748 06:37:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.748 06:37:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:22.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.748 --rc genhtml_branch_coverage=1 00:07:22.748 --rc genhtml_function_coverage=1 00:07:22.748 --rc genhtml_legend=1 00:07:22.748 --rc geninfo_all_blocks=1 00:07:22.748 --rc geninfo_unexecuted_blocks=1 00:07:22.748 00:07:22.748 ' 00:07:22.748 06:37:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:22.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.748 --rc genhtml_branch_coverage=1 00:07:22.748 --rc genhtml_function_coverage=1 00:07:22.748 --rc genhtml_legend=1 00:07:22.748 --rc geninfo_all_blocks=1 00:07:22.748 --rc geninfo_unexecuted_blocks=1 00:07:22.748 00:07:22.748 ' 00:07:22.748 06:37:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:22.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.748 --rc genhtml_branch_coverage=1 00:07:22.748 --rc genhtml_function_coverage=1 00:07:22.748 --rc genhtml_legend=1 00:07:22.748 --rc geninfo_all_blocks=1 00:07:22.748 --rc geninfo_unexecuted_blocks=1 00:07:22.748 00:07:22.748 ' 00:07:22.748 06:37:36 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:22.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.748 --rc genhtml_branch_coverage=1 00:07:22.748 --rc genhtml_function_coverage=1 00:07:22.748 --rc genhtml_legend=1 00:07:22.748 --rc geninfo_all_blocks=1 00:07:22.748 --rc geninfo_unexecuted_blocks=1 00:07:22.748 00:07:22.748 ' 00:07:22.748 06:37:36 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.748 06:37:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.748 06:37:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.748 06:37:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.748 06:37:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.749 06:37:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.749 06:37:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.749 06:37:36 -- paths/export.sh@5 -- # export PATH 00:07:22.749 06:37:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.749 06:37:36 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:22.749 06:37:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:22.749 06:37:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.749 06:37:36 -- common/autotest_common.sh@10 -- # set +x 00:07:22.749 ************************************ 00:07:22.749 START TEST dd_uring_copy 00:07:22.749 ************************************ 00:07:22.749 06:37:36 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:07:22.749 06:37:36 -- dd/uring.sh@15 -- # local zram_dev_id 00:07:22.749 06:37:36 -- dd/uring.sh@16 -- # local magic 00:07:22.749 06:37:36 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:22.749 06:37:36 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:22.749 06:37:36 -- dd/uring.sh@19 -- # local verify_magic 00:07:22.749 06:37:36 -- dd/uring.sh@21 -- # init_zram 00:07:22.749 06:37:36 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:22.749 06:37:36 -- dd/common.sh@164 -- # return 00:07:22.749 06:37:36 -- dd/uring.sh@22 -- # create_zram_dev 00:07:22.749 06:37:36 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:22.749 06:37:36 -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:22.749 06:37:36 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:22.749 06:37:36 -- dd/common.sh@181 -- # local id=1 00:07:22.749 06:37:36 -- dd/common.sh@182 -- # local size=512M 00:07:22.749 06:37:36 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:22.749 06:37:36 -- dd/common.sh@186 -- # echo 512M 00:07:22.749 06:37:36 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:22.749 06:37:36 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:22.749 06:37:36 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:22.749 06:37:36 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:22.749 06:37:36 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:22.749 06:37:36 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:22.749 06:37:36 -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:22.749 06:37:36 -- dd/common.sh@98 -- # xtrace_disable 00:07:22.749 06:37:36 -- common/autotest_common.sh@10 -- # set +x 00:07:22.749 06:37:36 -- dd/uring.sh@41 -- # magic=ent3kcnh6e6eypu0s2l7g6xalxbmnw4fwziheu58ja0cd8m8hndw4wr7d0uz4oyels9dgj0zjy3kiztt9t0l65vdaieqp2bjabe8qdux0tnaghe3veyinptutgfbu47p8fis9sagsw2ej84ta1gb702to1h7f7pymhi51mq4rvhsiitp3rmtogpk32renvehqqa1421dsm2zppbsvku70k5hx4v8qm47umiu3pllkixdsh1kbhaht3nyt8yjedxowkl9wvwt0j3zjd1symcgdboplqlt1g115gb6o6478sjm9ttuwcfvis1oc9prekyema85jimzmzgun10orhnzse72jiobfojd3cbeu6uqfar9w5h0aie6755agt5oe4c5lrr34ol7gu9t25dghhtky77cxuxxyhpoxsxre8z770h6rrm8qsm9uf22tbar01tf9tswzlah5y67k6b6wmddv3txdo9riucllok31g16ntwwiekvdpap6wmaekomtu5738g27xuh8m6rvxopmml4w46oqongxbipdah9s888h0kdy6td2qlmk3eof5808zojvwz67z8ntaunxrg770u0rf6vyyrsk302ypeyt8jmp3y94hronjc429bdcll52mkemcxfo2tvltsbq1fzr0tqsdclekk396rfb0emoh3g8ya7wyhevevyvko1pponicdje1i6pbsdmdd47ef6hnltnt2rey5wwb4okjpf27t0i0wph09zak77re82ndur8v9xuqu2512p6ti5oh77l1roskzo8h9266s22rnkmt3ghz5qlz1zwa0i9902pqno0crkl4oemj7skt32dx7ggl8qq95z3jfsadxctvn6ldp0o7wjvwngb54xgln5svdwjdzjzldfo5nz0dpar0r69gc13mjzo349fpp2k715lwdsw7889b80shpmn9amoebq9jvo0tcflyuehfn0uncd53vvxntkakuskdptphr9znj2qff1lw129bm0nswumz1vguss 00:07:22.749 06:37:36 -- dd/uring.sh@42 -- # echo 
ent3kcnh6e6eypu0s2l7g6xalxbmnw4fwziheu58ja0cd8m8hndw4wr7d0uz4oyels9dgj0zjy3kiztt9t0l65vdaieqp2bjabe8qdux0tnaghe3veyinptutgfbu47p8fis9sagsw2ej84ta1gb702to1h7f7pymhi51mq4rvhsiitp3rmtogpk32renvehqqa1421dsm2zppbsvku70k5hx4v8qm47umiu3pllkixdsh1kbhaht3nyt8yjedxowkl9wvwt0j3zjd1symcgdboplqlt1g115gb6o6478sjm9ttuwcfvis1oc9prekyema85jimzmzgun10orhnzse72jiobfojd3cbeu6uqfar9w5h0aie6755agt5oe4c5lrr34ol7gu9t25dghhtky77cxuxxyhpoxsxre8z770h6rrm8qsm9uf22tbar01tf9tswzlah5y67k6b6wmddv3txdo9riucllok31g16ntwwiekvdpap6wmaekomtu5738g27xuh8m6rvxopmml4w46oqongxbipdah9s888h0kdy6td2qlmk3eof5808zojvwz67z8ntaunxrg770u0rf6vyyrsk302ypeyt8jmp3y94hronjc429bdcll52mkemcxfo2tvltsbq1fzr0tqsdclekk396rfb0emoh3g8ya7wyhevevyvko1pponicdje1i6pbsdmdd47ef6hnltnt2rey5wwb4okjpf27t0i0wph09zak77re82ndur8v9xuqu2512p6ti5oh77l1roskzo8h9266s22rnkmt3ghz5qlz1zwa0i9902pqno0crkl4oemj7skt32dx7ggl8qq95z3jfsadxctvn6ldp0o7wjvwngb54xgln5svdwjdzjzldfo5nz0dpar0r69gc13mjzo349fpp2k715lwdsw7889b80shpmn9amoebq9jvo0tcflyuehfn0uncd53vvxntkakuskdptphr9znj2qff1lw129bm0nswumz1vguss 00:07:22.749 06:37:36 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:23.008 [2024-12-14 06:37:36.780065] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:23.008 [2024-12-14 06:37:36.780320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59035 ] 00:07:23.008 [2024-12-14 06:37:36.916946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.008 [2024-12-14 06:37:36.981279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.576  [2024-12-14T06:37:37.828Z] Copying: 511/511 [MB] (average 1514 MBps) 00:07:23.836 00:07:23.836 06:37:37 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:23.836 06:37:37 -- dd/uring.sh@54 -- # gen_conf 00:07:23.836 06:37:37 -- dd/common.sh@31 -- # xtrace_disable 00:07:23.836 06:37:37 -- common/autotest_common.sh@10 -- # set +x 00:07:24.094 [2024-12-14 06:37:37.833155] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:24.094 [2024-12-14 06:37:37.833248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59049 ] 00:07:24.094 { 00:07:24.094 "subsystems": [ 00:07:24.094 { 00:07:24.094 "subsystem": "bdev", 00:07:24.094 "config": [ 00:07:24.094 { 00:07:24.094 "params": { 00:07:24.094 "block_size": 512, 00:07:24.094 "num_blocks": 1048576, 00:07:24.094 "name": "malloc0" 00:07:24.094 }, 00:07:24.094 "method": "bdev_malloc_create" 00:07:24.094 }, 00:07:24.094 { 00:07:24.094 "params": { 00:07:24.094 "filename": "/dev/zram1", 00:07:24.094 "name": "uring0" 00:07:24.094 }, 00:07:24.095 "method": "bdev_uring_create" 00:07:24.095 }, 00:07:24.095 { 00:07:24.095 "method": "bdev_wait_for_examine" 00:07:24.095 } 00:07:24.095 ] 00:07:24.095 } 00:07:24.095 ] 00:07:24.095 } 00:07:24.095 [2024-12-14 06:37:37.971693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.095 [2024-12-14 06:37:38.045194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.472  [2024-12-14T06:37:40.401Z] Copying: 190/512 [MB] (190 MBps) [2024-12-14T06:37:40.970Z] Copying: 382/512 [MB] (191 MBps) [2024-12-14T06:37:41.228Z] Copying: 512/512 [MB] (average 191 MBps) 00:07:27.236 00:07:27.236 06:37:41 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:27.236 06:37:41 -- dd/uring.sh@60 -- # gen_conf 00:07:27.236 06:37:41 -- dd/common.sh@31 -- # xtrace_disable 00:07:27.236 06:37:41 -- common/autotest_common.sh@10 -- # set +x 00:07:27.495 [2024-12-14 06:37:41.260780] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
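
Editor's aside: the dd_uring_copy test above backs its uring0 bdev with a zram device; a fresh /dev/zram1 is allocated through the kernel's zram-control interface, sized to 512M, and paired in the bdev config with a malloc0 bdev of 1048576 512-byte blocks (512 MiB). A rough sketch of that device lifecycle, assuming the zram module is loaded and root privileges (this is not the harness code itself):

dev_id=$(cat /sys/class/zram-control/hot_add)        # allocates /dev/zram${dev_id}
echo 512M > "/sys/block/zram${dev_id}/disksize"      # same 512M size the test uses
# ... create a uring bdev on /dev/zram${dev_id} and run the malloc0 <-> uring0 copies ...
echo 1 > "/sys/block/zram${dev_id}/reset"            # discard the device contents
echo "$dev_id" > /sys/class/zram-control/hot_remove  # give the device id back
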
00:07:27.495 [2024-12-14 06:37:41.261056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59096 ] 00:07:27.495 { 00:07:27.495 "subsystems": [ 00:07:27.495 { 00:07:27.495 "subsystem": "bdev", 00:07:27.495 "config": [ 00:07:27.495 { 00:07:27.495 "params": { 00:07:27.495 "block_size": 512, 00:07:27.495 "num_blocks": 1048576, 00:07:27.495 "name": "malloc0" 00:07:27.495 }, 00:07:27.495 "method": "bdev_malloc_create" 00:07:27.495 }, 00:07:27.495 { 00:07:27.495 "params": { 00:07:27.495 "filename": "/dev/zram1", 00:07:27.495 "name": "uring0" 00:07:27.495 }, 00:07:27.495 "method": "bdev_uring_create" 00:07:27.495 }, 00:07:27.495 { 00:07:27.495 "method": "bdev_wait_for_examine" 00:07:27.495 } 00:07:27.495 ] 00:07:27.495 } 00:07:27.495 ] 00:07:27.495 } 00:07:27.495 [2024-12-14 06:37:41.400524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.495 [2024-12-14 06:37:41.473514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.871  [2024-12-14T06:37:43.798Z] Copying: 122/512 [MB] (122 MBps) [2024-12-14T06:37:44.734Z] Copying: 248/512 [MB] (125 MBps) [2024-12-14T06:37:45.668Z] Copying: 398/512 [MB] (149 MBps) [2024-12-14T06:37:45.938Z] Copying: 512/512 [MB] (average 135 MBps) 00:07:31.946 00:07:31.946 06:37:45 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:31.946 06:37:45 -- dd/uring.sh@66 -- # [[ ent3kcnh6e6eypu0s2l7g6xalxbmnw4fwziheu58ja0cd8m8hndw4wr7d0uz4oyels9dgj0zjy3kiztt9t0l65vdaieqp2bjabe8qdux0tnaghe3veyinptutgfbu47p8fis9sagsw2ej84ta1gb702to1h7f7pymhi51mq4rvhsiitp3rmtogpk32renvehqqa1421dsm2zppbsvku70k5hx4v8qm47umiu3pllkixdsh1kbhaht3nyt8yjedxowkl9wvwt0j3zjd1symcgdboplqlt1g115gb6o6478sjm9ttuwcfvis1oc9prekyema85jimzmzgun10orhnzse72jiobfojd3cbeu6uqfar9w5h0aie6755agt5oe4c5lrr34ol7gu9t25dghhtky77cxuxxyhpoxsxre8z770h6rrm8qsm9uf22tbar01tf9tswzlah5y67k6b6wmddv3txdo9riucllok31g16ntwwiekvdpap6wmaekomtu5738g27xuh8m6rvxopmml4w46oqongxbipdah9s888h0kdy6td2qlmk3eof5808zojvwz67z8ntaunxrg770u0rf6vyyrsk302ypeyt8jmp3y94hronjc429bdcll52mkemcxfo2tvltsbq1fzr0tqsdclekk396rfb0emoh3g8ya7wyhevevyvko1pponicdje1i6pbsdmdd47ef6hnltnt2rey5wwb4okjpf27t0i0wph09zak77re82ndur8v9xuqu2512p6ti5oh77l1roskzo8h9266s22rnkmt3ghz5qlz1zwa0i9902pqno0crkl4oemj7skt32dx7ggl8qq95z3jfsadxctvn6ldp0o7wjvwngb54xgln5svdwjdzjzldfo5nz0dpar0r69gc13mjzo349fpp2k715lwdsw7889b80shpmn9amoebq9jvo0tcflyuehfn0uncd53vvxntkakuskdptphr9znj2qff1lw129bm0nswumz1vguss == 
\e\n\t\3\k\c\n\h\6\e\6\e\y\p\u\0\s\2\l\7\g\6\x\a\l\x\b\m\n\w\4\f\w\z\i\h\e\u\5\8\j\a\0\c\d\8\m\8\h\n\d\w\4\w\r\7\d\0\u\z\4\o\y\e\l\s\9\d\g\j\0\z\j\y\3\k\i\z\t\t\9\t\0\l\6\5\v\d\a\i\e\q\p\2\b\j\a\b\e\8\q\d\u\x\0\t\n\a\g\h\e\3\v\e\y\i\n\p\t\u\t\g\f\b\u\4\7\p\8\f\i\s\9\s\a\g\s\w\2\e\j\8\4\t\a\1\g\b\7\0\2\t\o\1\h\7\f\7\p\y\m\h\i\5\1\m\q\4\r\v\h\s\i\i\t\p\3\r\m\t\o\g\p\k\3\2\r\e\n\v\e\h\q\q\a\1\4\2\1\d\s\m\2\z\p\p\b\s\v\k\u\7\0\k\5\h\x\4\v\8\q\m\4\7\u\m\i\u\3\p\l\l\k\i\x\d\s\h\1\k\b\h\a\h\t\3\n\y\t\8\y\j\e\d\x\o\w\k\l\9\w\v\w\t\0\j\3\z\j\d\1\s\y\m\c\g\d\b\o\p\l\q\l\t\1\g\1\1\5\g\b\6\o\6\4\7\8\s\j\m\9\t\t\u\w\c\f\v\i\s\1\o\c\9\p\r\e\k\y\e\m\a\8\5\j\i\m\z\m\z\g\u\n\1\0\o\r\h\n\z\s\e\7\2\j\i\o\b\f\o\j\d\3\c\b\e\u\6\u\q\f\a\r\9\w\5\h\0\a\i\e\6\7\5\5\a\g\t\5\o\e\4\c\5\l\r\r\3\4\o\l\7\g\u\9\t\2\5\d\g\h\h\t\k\y\7\7\c\x\u\x\x\y\h\p\o\x\s\x\r\e\8\z\7\7\0\h\6\r\r\m\8\q\s\m\9\u\f\2\2\t\b\a\r\0\1\t\f\9\t\s\w\z\l\a\h\5\y\6\7\k\6\b\6\w\m\d\d\v\3\t\x\d\o\9\r\i\u\c\l\l\o\k\3\1\g\1\6\n\t\w\w\i\e\k\v\d\p\a\p\6\w\m\a\e\k\o\m\t\u\5\7\3\8\g\2\7\x\u\h\8\m\6\r\v\x\o\p\m\m\l\4\w\4\6\o\q\o\n\g\x\b\i\p\d\a\h\9\s\8\8\8\h\0\k\d\y\6\t\d\2\q\l\m\k\3\e\o\f\5\8\0\8\z\o\j\v\w\z\6\7\z\8\n\t\a\u\n\x\r\g\7\7\0\u\0\r\f\6\v\y\y\r\s\k\3\0\2\y\p\e\y\t\8\j\m\p\3\y\9\4\h\r\o\n\j\c\4\2\9\b\d\c\l\l\5\2\m\k\e\m\c\x\f\o\2\t\v\l\t\s\b\q\1\f\z\r\0\t\q\s\d\c\l\e\k\k\3\9\6\r\f\b\0\e\m\o\h\3\g\8\y\a\7\w\y\h\e\v\e\v\y\v\k\o\1\p\p\o\n\i\c\d\j\e\1\i\6\p\b\s\d\m\d\d\4\7\e\f\6\h\n\l\t\n\t\2\r\e\y\5\w\w\b\4\o\k\j\p\f\2\7\t\0\i\0\w\p\h\0\9\z\a\k\7\7\r\e\8\2\n\d\u\r\8\v\9\x\u\q\u\2\5\1\2\p\6\t\i\5\o\h\7\7\l\1\r\o\s\k\z\o\8\h\9\2\6\6\s\2\2\r\n\k\m\t\3\g\h\z\5\q\l\z\1\z\w\a\0\i\9\9\0\2\p\q\n\o\0\c\r\k\l\4\o\e\m\j\7\s\k\t\3\2\d\x\7\g\g\l\8\q\q\9\5\z\3\j\f\s\a\d\x\c\t\v\n\6\l\d\p\0\o\7\w\j\v\w\n\g\b\5\4\x\g\l\n\5\s\v\d\w\j\d\z\j\z\l\d\f\o\5\n\z\0\d\p\a\r\0\r\6\9\g\c\1\3\m\j\z\o\3\4\9\f\p\p\2\k\7\1\5\l\w\d\s\w\7\8\8\9\b\8\0\s\h\p\m\n\9\a\m\o\e\b\q\9\j\v\o\0\t\c\f\l\y\u\e\h\f\n\0\u\n\c\d\5\3\v\v\x\n\t\k\a\k\u\s\k\d\p\t\p\h\r\9\z\n\j\2\q\f\f\1\l\w\1\2\9\b\m\0\n\s\w\u\m\z\1\v\g\u\s\s ]] 00:07:31.946 06:37:45 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:31.947 06:37:45 -- dd/uring.sh@69 -- # [[ ent3kcnh6e6eypu0s2l7g6xalxbmnw4fwziheu58ja0cd8m8hndw4wr7d0uz4oyels9dgj0zjy3kiztt9t0l65vdaieqp2bjabe8qdux0tnaghe3veyinptutgfbu47p8fis9sagsw2ej84ta1gb702to1h7f7pymhi51mq4rvhsiitp3rmtogpk32renvehqqa1421dsm2zppbsvku70k5hx4v8qm47umiu3pllkixdsh1kbhaht3nyt8yjedxowkl9wvwt0j3zjd1symcgdboplqlt1g115gb6o6478sjm9ttuwcfvis1oc9prekyema85jimzmzgun10orhnzse72jiobfojd3cbeu6uqfar9w5h0aie6755agt5oe4c5lrr34ol7gu9t25dghhtky77cxuxxyhpoxsxre8z770h6rrm8qsm9uf22tbar01tf9tswzlah5y67k6b6wmddv3txdo9riucllok31g16ntwwiekvdpap6wmaekomtu5738g27xuh8m6rvxopmml4w46oqongxbipdah9s888h0kdy6td2qlmk3eof5808zojvwz67z8ntaunxrg770u0rf6vyyrsk302ypeyt8jmp3y94hronjc429bdcll52mkemcxfo2tvltsbq1fzr0tqsdclekk396rfb0emoh3g8ya7wyhevevyvko1pponicdje1i6pbsdmdd47ef6hnltnt2rey5wwb4okjpf27t0i0wph09zak77re82ndur8v9xuqu2512p6ti5oh77l1roskzo8h9266s22rnkmt3ghz5qlz1zwa0i9902pqno0crkl4oemj7skt32dx7ggl8qq95z3jfsadxctvn6ldp0o7wjvwngb54xgln5svdwjdzjzldfo5nz0dpar0r69gc13mjzo349fpp2k715lwdsw7889b80shpmn9amoebq9jvo0tcflyuehfn0uncd53vvxntkakuskdptphr9znj2qff1lw129bm0nswumz1vguss == 
\e\n\t\3\k\c\n\h\6\e\6\e\y\p\u\0\s\2\l\7\g\6\x\a\l\x\b\m\n\w\4\f\w\z\i\h\e\u\5\8\j\a\0\c\d\8\m\8\h\n\d\w\4\w\r\7\d\0\u\z\4\o\y\e\l\s\9\d\g\j\0\z\j\y\3\k\i\z\t\t\9\t\0\l\6\5\v\d\a\i\e\q\p\2\b\j\a\b\e\8\q\d\u\x\0\t\n\a\g\h\e\3\v\e\y\i\n\p\t\u\t\g\f\b\u\4\7\p\8\f\i\s\9\s\a\g\s\w\2\e\j\8\4\t\a\1\g\b\7\0\2\t\o\1\h\7\f\7\p\y\m\h\i\5\1\m\q\4\r\v\h\s\i\i\t\p\3\r\m\t\o\g\p\k\3\2\r\e\n\v\e\h\q\q\a\1\4\2\1\d\s\m\2\z\p\p\b\s\v\k\u\7\0\k\5\h\x\4\v\8\q\m\4\7\u\m\i\u\3\p\l\l\k\i\x\d\s\h\1\k\b\h\a\h\t\3\n\y\t\8\y\j\e\d\x\o\w\k\l\9\w\v\w\t\0\j\3\z\j\d\1\s\y\m\c\g\d\b\o\p\l\q\l\t\1\g\1\1\5\g\b\6\o\6\4\7\8\s\j\m\9\t\t\u\w\c\f\v\i\s\1\o\c\9\p\r\e\k\y\e\m\a\8\5\j\i\m\z\m\z\g\u\n\1\0\o\r\h\n\z\s\e\7\2\j\i\o\b\f\o\j\d\3\c\b\e\u\6\u\q\f\a\r\9\w\5\h\0\a\i\e\6\7\5\5\a\g\t\5\o\e\4\c\5\l\r\r\3\4\o\l\7\g\u\9\t\2\5\d\g\h\h\t\k\y\7\7\c\x\u\x\x\y\h\p\o\x\s\x\r\e\8\z\7\7\0\h\6\r\r\m\8\q\s\m\9\u\f\2\2\t\b\a\r\0\1\t\f\9\t\s\w\z\l\a\h\5\y\6\7\k\6\b\6\w\m\d\d\v\3\t\x\d\o\9\r\i\u\c\l\l\o\k\3\1\g\1\6\n\t\w\w\i\e\k\v\d\p\a\p\6\w\m\a\e\k\o\m\t\u\5\7\3\8\g\2\7\x\u\h\8\m\6\r\v\x\o\p\m\m\l\4\w\4\6\o\q\o\n\g\x\b\i\p\d\a\h\9\s\8\8\8\h\0\k\d\y\6\t\d\2\q\l\m\k\3\e\o\f\5\8\0\8\z\o\j\v\w\z\6\7\z\8\n\t\a\u\n\x\r\g\7\7\0\u\0\r\f\6\v\y\y\r\s\k\3\0\2\y\p\e\y\t\8\j\m\p\3\y\9\4\h\r\o\n\j\c\4\2\9\b\d\c\l\l\5\2\m\k\e\m\c\x\f\o\2\t\v\l\t\s\b\q\1\f\z\r\0\t\q\s\d\c\l\e\k\k\3\9\6\r\f\b\0\e\m\o\h\3\g\8\y\a\7\w\y\h\e\v\e\v\y\v\k\o\1\p\p\o\n\i\c\d\j\e\1\i\6\p\b\s\d\m\d\d\4\7\e\f\6\h\n\l\t\n\t\2\r\e\y\5\w\w\b\4\o\k\j\p\f\2\7\t\0\i\0\w\p\h\0\9\z\a\k\7\7\r\e\8\2\n\d\u\r\8\v\9\x\u\q\u\2\5\1\2\p\6\t\i\5\o\h\7\7\l\1\r\o\s\k\z\o\8\h\9\2\6\6\s\2\2\r\n\k\m\t\3\g\h\z\5\q\l\z\1\z\w\a\0\i\9\9\0\2\p\q\n\o\0\c\r\k\l\4\o\e\m\j\7\s\k\t\3\2\d\x\7\g\g\l\8\q\q\9\5\z\3\j\f\s\a\d\x\c\t\v\n\6\l\d\p\0\o\7\w\j\v\w\n\g\b\5\4\x\g\l\n\5\s\v\d\w\j\d\z\j\z\l\d\f\o\5\n\z\0\d\p\a\r\0\r\6\9\g\c\1\3\m\j\z\o\3\4\9\f\p\p\2\k\7\1\5\l\w\d\s\w\7\8\8\9\b\8\0\s\h\p\m\n\9\a\m\o\e\b\q\9\j\v\o\0\t\c\f\l\y\u\e\h\f\n\0\u\n\c\d\5\3\v\v\x\n\t\k\a\k\u\s\k\d\p\t\p\h\r\9\z\n\j\2\q\f\f\1\l\w\1\2\9\b\m\0\n\s\w\u\m\z\1\v\g\u\s\s ]] 00:07:31.947 06:37:45 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:32.245 06:37:46 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:32.245 06:37:46 -- dd/uring.sh@75 -- # gen_conf 00:07:32.245 06:37:46 -- dd/common.sh@31 -- # xtrace_disable 00:07:32.245 06:37:46 -- common/autotest_common.sh@10 -- # set +x 00:07:32.245 [2024-12-14 06:37:46.111517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:32.245 [2024-12-14 06:37:46.111616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59164 ] 00:07:32.245 { 00:07:32.245 "subsystems": [ 00:07:32.245 { 00:07:32.245 "subsystem": "bdev", 00:07:32.245 "config": [ 00:07:32.245 { 00:07:32.245 "params": { 00:07:32.245 "block_size": 512, 00:07:32.245 "num_blocks": 1048576, 00:07:32.245 "name": "malloc0" 00:07:32.245 }, 00:07:32.245 "method": "bdev_malloc_create" 00:07:32.245 }, 00:07:32.245 { 00:07:32.245 "params": { 00:07:32.245 "filename": "/dev/zram1", 00:07:32.245 "name": "uring0" 00:07:32.245 }, 00:07:32.245 "method": "bdev_uring_create" 00:07:32.245 }, 00:07:32.245 { 00:07:32.245 "method": "bdev_wait_for_examine" 00:07:32.245 } 00:07:32.245 ] 00:07:32.245 } 00:07:32.245 ] 00:07:32.245 } 00:07:32.513 [2024-12-14 06:37:46.250449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.513 [2024-12-14 06:37:46.298562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.449  [2024-12-14T06:37:48.818Z] Copying: 166/512 [MB] (166 MBps) [2024-12-14T06:37:49.755Z] Copying: 334/512 [MB] (168 MBps) [2024-12-14T06:37:49.755Z] Copying: 490/512 [MB] (155 MBps) [2024-12-14T06:37:50.014Z] Copying: 512/512 [MB] (average 163 MBps) 00:07:36.022 00:07:36.022 06:37:49 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:36.022 06:37:49 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:36.022 06:37:49 -- dd/uring.sh@87 -- # : 00:07:36.022 06:37:49 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:36.022 06:37:49 -- dd/uring.sh@87 -- # : 00:07:36.022 06:37:49 -- dd/uring.sh@87 -- # gen_conf 00:07:36.022 06:37:49 -- dd/common.sh@31 -- # xtrace_disable 00:07:36.022 06:37:49 -- common/autotest_common.sh@10 -- # set +x 00:07:36.022 [2024-12-14 06:37:49.893249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:36.022 [2024-12-14 06:37:49.893654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59214 ] 00:07:36.022 { 00:07:36.022 "subsystems": [ 00:07:36.022 { 00:07:36.022 "subsystem": "bdev", 00:07:36.022 "config": [ 00:07:36.022 { 00:07:36.022 "params": { 00:07:36.022 "block_size": 512, 00:07:36.022 "num_blocks": 1048576, 00:07:36.022 "name": "malloc0" 00:07:36.022 }, 00:07:36.022 "method": "bdev_malloc_create" 00:07:36.022 }, 00:07:36.022 { 00:07:36.022 "params": { 00:07:36.022 "filename": "/dev/zram1", 00:07:36.022 "name": "uring0" 00:07:36.022 }, 00:07:36.022 "method": "bdev_uring_create" 00:07:36.022 }, 00:07:36.022 { 00:07:36.022 "params": { 00:07:36.022 "name": "uring0" 00:07:36.022 }, 00:07:36.022 "method": "bdev_uring_delete" 00:07:36.022 }, 00:07:36.022 { 00:07:36.022 "method": "bdev_wait_for_examine" 00:07:36.022 } 00:07:36.022 ] 00:07:36.022 } 00:07:36.022 ] 00:07:36.022 } 00:07:36.282 [2024-12-14 06:37:50.024620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.282 [2024-12-14 06:37:50.074767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.282  [2024-12-14T06:37:50.843Z] Copying: 0/0 [B] (average 0 Bps) 00:07:36.851 00:07:36.851 06:37:50 -- dd/uring.sh@94 -- # : 00:07:36.851 06:37:50 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:36.851 06:37:50 -- dd/uring.sh@94 -- # gen_conf 00:07:36.851 06:37:50 -- common/autotest_common.sh@650 -- # local es=0 00:07:36.851 06:37:50 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:36.851 06:37:50 -- dd/common.sh@31 -- # xtrace_disable 00:07:36.851 06:37:50 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.851 06:37:50 -- common/autotest_common.sh@10 -- # set +x 00:07:36.851 06:37:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.851 06:37:50 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.851 06:37:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.851 06:37:50 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.851 06:37:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.851 06:37:50 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:36.851 06:37:50 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:36.851 06:37:50 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:36.851 { 00:07:36.851 "subsystems": [ 00:07:36.851 { 00:07:36.851 "subsystem": "bdev", 00:07:36.851 "config": [ 00:07:36.851 { 00:07:36.851 "params": { 00:07:36.851 "block_size": 512, 00:07:36.851 "num_blocks": 1048576, 00:07:36.851 "name": "malloc0" 00:07:36.851 }, 00:07:36.851 "method": "bdev_malloc_create" 00:07:36.851 }, 00:07:36.851 { 00:07:36.851 "params": { 00:07:36.851 "filename": "/dev/zram1", 00:07:36.851 "name": "uring0" 00:07:36.851 }, 00:07:36.851 "method": "bdev_uring_create" 00:07:36.851 }, 00:07:36.851 { 00:07:36.851 "params": { 00:07:36.851 
"name": "uring0" 00:07:36.851 }, 00:07:36.851 "method": "bdev_uring_delete" 00:07:36.851 }, 00:07:36.851 { 00:07:36.851 "method": "bdev_wait_for_examine" 00:07:36.851 } 00:07:36.851 ] 00:07:36.851 } 00:07:36.851 ] 00:07:36.851 } 00:07:36.851 [2024-12-14 06:37:50.610819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:36.851 [2024-12-14 06:37:50.611139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59239 ] 00:07:36.851 [2024-12-14 06:37:50.744217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.851 [2024-12-14 06:37:50.798390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.110 [2024-12-14 06:37:50.945133] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:37.110 [2024-12-14 06:37:50.945183] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:37.110 [2024-12-14 06:37:50.945209] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:07:37.110 [2024-12-14 06:37:50.945217] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.369 [2024-12-14 06:37:51.112572] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:37.369 06:37:51 -- common/autotest_common.sh@653 -- # es=237 00:07:37.369 06:37:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.369 06:37:51 -- common/autotest_common.sh@662 -- # es=109 00:07:37.369 06:37:51 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:37.369 06:37:51 -- common/autotest_common.sh@670 -- # es=1 00:07:37.369 06:37:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.369 06:37:51 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:37.369 06:37:51 -- dd/common.sh@172 -- # local id=1 00:07:37.369 06:37:51 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:37.369 06:37:51 -- dd/common.sh@176 -- # echo 1 00:07:37.369 06:37:51 -- dd/common.sh@177 -- # echo 1 00:07:37.369 06:37:51 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:37.628 00:07:37.628 real 0m14.839s 00:07:37.628 user 0m8.304s 00:07:37.628 sys 0m5.841s 00:07:37.628 06:37:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.628 ************************************ 00:07:37.628 END TEST dd_uring_copy 00:07:37.628 ************************************ 00:07:37.628 06:37:51 -- common/autotest_common.sh@10 -- # set +x 00:07:37.628 ************************************ 00:07:37.628 END TEST spdk_dd_uring 00:07:37.628 ************************************ 00:07:37.628 00:07:37.628 real 0m15.083s 00:07:37.628 user 0m8.434s 00:07:37.628 sys 0m5.955s 00:07:37.628 06:37:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.628 06:37:51 -- common/autotest_common.sh@10 -- # set +x 00:07:37.886 06:37:51 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:37.886 06:37:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:37.886 06:37:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.887 06:37:51 -- common/autotest_common.sh@10 -- # set +x 00:07:37.887 ************************************ 00:07:37.887 START TEST spdk_dd_sparse 00:07:37.887 ************************************ 00:07:37.887 06:37:51 -- 
common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:37.887 * Looking for test storage... 00:07:37.887 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:37.887 06:37:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:37.887 06:37:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:37.887 06:37:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:37.887 06:37:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:37.887 06:37:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:37.887 06:37:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:37.887 06:37:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:37.887 06:37:51 -- scripts/common.sh@335 -- # IFS=.-: 00:07:37.887 06:37:51 -- scripts/common.sh@335 -- # read -ra ver1 00:07:37.887 06:37:51 -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.887 06:37:51 -- scripts/common.sh@336 -- # read -ra ver2 00:07:37.887 06:37:51 -- scripts/common.sh@337 -- # local 'op=<' 00:07:37.887 06:37:51 -- scripts/common.sh@339 -- # ver1_l=2 00:07:37.887 06:37:51 -- scripts/common.sh@340 -- # ver2_l=1 00:07:37.887 06:37:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:37.887 06:37:51 -- scripts/common.sh@343 -- # case "$op" in 00:07:37.887 06:37:51 -- scripts/common.sh@344 -- # : 1 00:07:37.887 06:37:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:37.887 06:37:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.887 06:37:51 -- scripts/common.sh@364 -- # decimal 1 00:07:37.887 06:37:51 -- scripts/common.sh@352 -- # local d=1 00:07:37.887 06:37:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.887 06:37:51 -- scripts/common.sh@354 -- # echo 1 00:07:37.887 06:37:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:37.887 06:37:51 -- scripts/common.sh@365 -- # decimal 2 00:07:37.887 06:37:51 -- scripts/common.sh@352 -- # local d=2 00:07:37.887 06:37:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.887 06:37:51 -- scripts/common.sh@354 -- # echo 2 00:07:37.887 06:37:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:37.887 06:37:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:37.887 06:37:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:37.887 06:37:51 -- scripts/common.sh@367 -- # return 0 00:07:37.887 06:37:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.887 06:37:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:37.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.887 --rc genhtml_branch_coverage=1 00:07:37.887 --rc genhtml_function_coverage=1 00:07:37.887 --rc genhtml_legend=1 00:07:37.887 --rc geninfo_all_blocks=1 00:07:37.887 --rc geninfo_unexecuted_blocks=1 00:07:37.887 00:07:37.887 ' 00:07:37.887 06:37:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:37.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.887 --rc genhtml_branch_coverage=1 00:07:37.887 --rc genhtml_function_coverage=1 00:07:37.887 --rc genhtml_legend=1 00:07:37.887 --rc geninfo_all_blocks=1 00:07:37.887 --rc geninfo_unexecuted_blocks=1 00:07:37.887 00:07:37.887 ' 00:07:37.887 06:37:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:37.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.887 --rc genhtml_branch_coverage=1 00:07:37.887 --rc genhtml_function_coverage=1 00:07:37.887 --rc genhtml_legend=1 00:07:37.887 
--rc geninfo_all_blocks=1 00:07:37.887 --rc geninfo_unexecuted_blocks=1 00:07:37.887 00:07:37.887 ' 00:07:37.887 06:37:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:37.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.887 --rc genhtml_branch_coverage=1 00:07:37.887 --rc genhtml_function_coverage=1 00:07:37.887 --rc genhtml_legend=1 00:07:37.887 --rc geninfo_all_blocks=1 00:07:37.887 --rc geninfo_unexecuted_blocks=1 00:07:37.887 00:07:37.887 ' 00:07:37.887 06:37:51 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:37.887 06:37:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.887 06:37:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.887 06:37:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.887 06:37:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.887 06:37:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.887 06:37:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.887 06:37:51 -- paths/export.sh@5 -- # export PATH 00:07:37.887 06:37:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.887 06:37:51 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:37.887 06:37:51 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:37.887 06:37:51 -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:37.887 06:37:51 -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:37.887 06:37:51 -- dd/sparse.sh@112 -- # file3=file_zero3 
00:07:37.887 06:37:51 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:37.887 06:37:51 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:37.887 06:37:51 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:37.887 06:37:51 -- dd/sparse.sh@118 -- # prepare 00:07:37.887 06:37:51 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:37.887 06:37:51 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:37.887 1+0 records in 00:07:37.887 1+0 records out 00:07:37.887 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0067536 s, 621 MB/s 00:07:37.887 06:37:51 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:37.887 1+0 records in 00:07:37.887 1+0 records out 00:07:37.887 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00590645 s, 710 MB/s 00:07:37.887 06:37:51 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:37.887 1+0 records in 00:07:37.887 1+0 records out 00:07:37.887 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00469722 s, 893 MB/s 00:07:37.887 06:37:51 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:37.887 06:37:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:37.887 06:37:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.887 06:37:51 -- common/autotest_common.sh@10 -- # set +x 00:07:38.146 ************************************ 00:07:38.146 START TEST dd_sparse_file_to_file 00:07:38.146 ************************************ 00:07:38.146 06:37:51 -- common/autotest_common.sh@1114 -- # file_to_file 00:07:38.146 06:37:51 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:38.146 06:37:51 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:38.146 06:37:51 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:38.146 06:37:51 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:38.146 06:37:51 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:38.146 06:37:51 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:38.146 06:37:51 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:38.146 06:37:51 -- dd/sparse.sh@41 -- # gen_conf 00:07:38.146 06:37:51 -- dd/common.sh@31 -- # xtrace_disable 00:07:38.146 06:37:51 -- common/autotest_common.sh@10 -- # set +x 00:07:38.146 [2024-12-14 06:37:51.929372] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
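
Editor's aside: the sparse tests above build file_zero1 by seeking dd past holes, i.e. three 4 MiB extents at byte offsets 0, 16 MiB and 32 MiB (seek counts are in units of bs=4M), so the file's apparent size is 36 MiB while only 12 MiB is actually allocated. A condensed sketch of that preparation and of the apparent-size versus allocated-blocks check the test performs next:

dd if=/dev/zero of=file_zero1 bs=4M count=1           # extent at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # extent at 16 MiB, hole behind it
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # extent at 32 MiB
stat --printf='%s bytes, %b 512-byte blocks\n' file_zero1
# On this run: 37748736 bytes apparent, 24576 blocks (12 MiB) actually allocated.
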
00:07:38.146 [2024-12-14 06:37:51.929650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59336 ] 00:07:38.146 { 00:07:38.146 "subsystems": [ 00:07:38.146 { 00:07:38.146 "subsystem": "bdev", 00:07:38.146 "config": [ 00:07:38.146 { 00:07:38.146 "params": { 00:07:38.146 "block_size": 4096, 00:07:38.146 "filename": "dd_sparse_aio_disk", 00:07:38.146 "name": "dd_aio" 00:07:38.146 }, 00:07:38.146 "method": "bdev_aio_create" 00:07:38.146 }, 00:07:38.146 { 00:07:38.146 "params": { 00:07:38.146 "lvs_name": "dd_lvstore", 00:07:38.146 "bdev_name": "dd_aio" 00:07:38.146 }, 00:07:38.146 "method": "bdev_lvol_create_lvstore" 00:07:38.146 }, 00:07:38.146 { 00:07:38.146 "method": "bdev_wait_for_examine" 00:07:38.146 } 00:07:38.146 ] 00:07:38.146 } 00:07:38.146 ] 00:07:38.146 } 00:07:38.146 [2024-12-14 06:37:52.066993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.146 [2024-12-14 06:37:52.116236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.405  [2024-12-14T06:37:52.656Z] Copying: 12/36 [MB] (average 1714 MBps) 00:07:38.664 00:07:38.664 06:37:52 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:38.664 06:37:52 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:38.664 06:37:52 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:38.664 06:37:52 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:38.664 06:37:52 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:38.664 06:37:52 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:38.664 06:37:52 -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:38.664 06:37:52 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:38.664 06:37:52 -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:38.664 06:37:52 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:38.664 00:07:38.664 real 0m0.566s 00:07:38.664 user 0m0.343s 00:07:38.664 sys 0m0.132s 00:07:38.664 ************************************ 00:07:38.664 END TEST dd_sparse_file_to_file 00:07:38.664 ************************************ 00:07:38.664 06:37:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.664 06:37:52 -- common/autotest_common.sh@10 -- # set +x 00:07:38.664 06:37:52 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:38.664 06:37:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.664 06:37:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.664 06:37:52 -- common/autotest_common.sh@10 -- # set +x 00:07:38.664 ************************************ 00:07:38.664 START TEST dd_sparse_file_to_bdev 00:07:38.664 ************************************ 00:07:38.664 06:37:52 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:07:38.664 06:37:52 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:38.664 06:37:52 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:38.664 06:37:52 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:07:38.664 06:37:52 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:38.664 06:37:52 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:38.664 06:37:52 -- 
dd/sparse.sh@73 -- # gen_conf 00:07:38.664 06:37:52 -- dd/common.sh@31 -- # xtrace_disable 00:07:38.664 06:37:52 -- common/autotest_common.sh@10 -- # set +x 00:07:38.664 [2024-12-14 06:37:52.547794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:38.664 [2024-12-14 06:37:52.548102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59378 ] 00:07:38.664 { 00:07:38.664 "subsystems": [ 00:07:38.664 { 00:07:38.664 "subsystem": "bdev", 00:07:38.664 "config": [ 00:07:38.664 { 00:07:38.664 "params": { 00:07:38.664 "block_size": 4096, 00:07:38.664 "filename": "dd_sparse_aio_disk", 00:07:38.664 "name": "dd_aio" 00:07:38.664 }, 00:07:38.664 "method": "bdev_aio_create" 00:07:38.664 }, 00:07:38.664 { 00:07:38.664 "params": { 00:07:38.664 "lvs_name": "dd_lvstore", 00:07:38.664 "lvol_name": "dd_lvol", 00:07:38.664 "size": 37748736, 00:07:38.664 "thin_provision": true 00:07:38.664 }, 00:07:38.664 "method": "bdev_lvol_create" 00:07:38.664 }, 00:07:38.664 { 00:07:38.664 "method": "bdev_wait_for_examine" 00:07:38.664 } 00:07:38.664 ] 00:07:38.664 } 00:07:38.664 ] 00:07:38.664 } 00:07:38.923 [2024-12-14 06:37:52.687308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.923 [2024-12-14 06:37:52.738847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.923 [2024-12-14 06:37:52.794096] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:07:38.923  [2024-12-14T06:37:52.915Z] Copying: 12/36 [MB] (average 324 MBps)[2024-12-14 06:37:52.847046] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:07:39.182 00:07:39.182 00:07:39.182 00:07:39.182 real 0m0.563s 00:07:39.182 user 0m0.359s 00:07:39.182 sys 0m0.131s 00:07:39.182 ************************************ 00:07:39.182 END TEST dd_sparse_file_to_bdev 00:07:39.182 ************************************ 00:07:39.182 06:37:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.182 06:37:53 -- common/autotest_common.sh@10 -- # set +x 00:07:39.182 06:37:53 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:39.182 06:37:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:39.182 06:37:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.182 06:37:53 -- common/autotest_common.sh@10 -- # set +x 00:07:39.182 ************************************ 00:07:39.182 START TEST dd_sparse_bdev_to_file 00:07:39.182 ************************************ 00:07:39.182 06:37:53 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:07:39.182 06:37:53 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:39.182 06:37:53 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:39.182 06:37:53 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:39.182 06:37:53 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:39.182 06:37:53 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:39.182 06:37:53 -- dd/sparse.sh@91 -- # gen_conf 00:07:39.182 06:37:53 -- dd/common.sh@31 
-- # xtrace_disable 00:07:39.182 06:37:53 -- common/autotest_common.sh@10 -- # set +x 00:07:39.182 [2024-12-14 06:37:53.155363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:39.182 [2024-12-14 06:37:53.155437] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59415 ] 00:07:39.441 { 00:07:39.441 "subsystems": [ 00:07:39.441 { 00:07:39.441 "subsystem": "bdev", 00:07:39.441 "config": [ 00:07:39.441 { 00:07:39.441 "params": { 00:07:39.441 "block_size": 4096, 00:07:39.441 "filename": "dd_sparse_aio_disk", 00:07:39.441 "name": "dd_aio" 00:07:39.441 }, 00:07:39.441 "method": "bdev_aio_create" 00:07:39.441 }, 00:07:39.441 { 00:07:39.441 "method": "bdev_wait_for_examine" 00:07:39.441 } 00:07:39.441 ] 00:07:39.441 } 00:07:39.441 ] 00:07:39.441 } 00:07:39.441 [2024-12-14 06:37:53.286007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.441 [2024-12-14 06:37:53.334054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.441  [2024-12-14T06:37:53.693Z] Copying: 12/36 [MB] (average 1500 MBps) 00:07:39.701 00:07:39.701 06:37:53 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:39.701 06:37:53 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:39.701 06:37:53 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:39.701 06:37:53 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:39.701 06:37:53 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:39.701 06:37:53 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:39.701 06:37:53 -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:39.701 06:37:53 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:39.701 06:37:53 -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:39.701 06:37:53 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:39.701 00:07:39.701 real 0m0.525s 00:07:39.701 user 0m0.336s 00:07:39.701 sys 0m0.115s 00:07:39.701 06:37:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.701 ************************************ 00:07:39.701 END TEST dd_sparse_bdev_to_file 00:07:39.701 ************************************ 00:07:39.701 06:37:53 -- common/autotest_common.sh@10 -- # set +x 00:07:39.701 06:37:53 -- dd/sparse.sh@1 -- # cleanup 00:07:39.701 06:37:53 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:39.701 06:37:53 -- dd/sparse.sh@12 -- # rm file_zero1 00:07:39.960 06:37:53 -- dd/sparse.sh@13 -- # rm file_zero2 00:07:39.960 06:37:53 -- dd/sparse.sh@14 -- # rm file_zero3 00:07:39.960 ************************************ 00:07:39.960 END TEST spdk_dd_sparse 00:07:39.960 ************************************ 00:07:39.960 00:07:39.960 real 0m2.064s 00:07:39.960 user 0m1.211s 00:07:39.960 sys 0m0.597s 00:07:39.960 06:37:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.960 06:37:53 -- common/autotest_common.sh@10 -- # set +x 00:07:39.960 06:37:53 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:39.960 06:37:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:39.960 06:37:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.960 06:37:53 -- common/autotest_common.sh@10 -- # set +x 00:07:39.960 ************************************ 00:07:39.960 START TEST spdk_dd_negative 00:07:39.960 ************************************ 00:07:39.960 06:37:53 -- 
common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:39.960 * Looking for test storage... 00:07:39.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:39.960 06:37:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:39.960 06:37:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:39.960 06:37:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:39.960 06:37:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:39.960 06:37:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:39.960 06:37:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:39.960 06:37:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:39.960 06:37:53 -- scripts/common.sh@335 -- # IFS=.-: 00:07:39.960 06:37:53 -- scripts/common.sh@335 -- # read -ra ver1 00:07:39.960 06:37:53 -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.960 06:37:53 -- scripts/common.sh@336 -- # read -ra ver2 00:07:39.960 06:37:53 -- scripts/common.sh@337 -- # local 'op=<' 00:07:39.960 06:37:53 -- scripts/common.sh@339 -- # ver1_l=2 00:07:39.960 06:37:53 -- scripts/common.sh@340 -- # ver2_l=1 00:07:39.960 06:37:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:39.961 06:37:53 -- scripts/common.sh@343 -- # case "$op" in 00:07:39.961 06:37:53 -- scripts/common.sh@344 -- # : 1 00:07:39.961 06:37:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:39.961 06:37:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.961 06:37:53 -- scripts/common.sh@364 -- # decimal 1 00:07:39.961 06:37:53 -- scripts/common.sh@352 -- # local d=1 00:07:39.961 06:37:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.961 06:37:53 -- scripts/common.sh@354 -- # echo 1 00:07:39.961 06:37:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:39.961 06:37:53 -- scripts/common.sh@365 -- # decimal 2 00:07:39.961 06:37:53 -- scripts/common.sh@352 -- # local d=2 00:07:39.961 06:37:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.961 06:37:53 -- scripts/common.sh@354 -- # echo 2 00:07:39.961 06:37:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:39.961 06:37:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:39.961 06:37:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:39.961 06:37:53 -- scripts/common.sh@367 -- # return 0 00:07:39.961 06:37:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.961 06:37:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:39.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.961 --rc genhtml_branch_coverage=1 00:07:39.961 --rc genhtml_function_coverage=1 00:07:39.961 --rc genhtml_legend=1 00:07:39.961 --rc geninfo_all_blocks=1 00:07:39.961 --rc geninfo_unexecuted_blocks=1 00:07:39.961 00:07:39.961 ' 00:07:39.961 06:37:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:39.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.961 --rc genhtml_branch_coverage=1 00:07:39.961 --rc genhtml_function_coverage=1 00:07:39.961 --rc genhtml_legend=1 00:07:39.961 --rc geninfo_all_blocks=1 00:07:39.961 --rc geninfo_unexecuted_blocks=1 00:07:39.961 00:07:39.961 ' 00:07:39.961 06:37:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:39.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.961 --rc genhtml_branch_coverage=1 00:07:39.961 --rc genhtml_function_coverage=1 00:07:39.961 --rc genhtml_legend=1 
00:07:39.961 --rc geninfo_all_blocks=1 00:07:39.961 --rc geninfo_unexecuted_blocks=1 00:07:39.961 00:07:39.961 ' 00:07:39.961 06:37:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:39.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.961 --rc genhtml_branch_coverage=1 00:07:39.961 --rc genhtml_function_coverage=1 00:07:39.961 --rc genhtml_legend=1 00:07:39.961 --rc geninfo_all_blocks=1 00:07:39.961 --rc geninfo_unexecuted_blocks=1 00:07:39.961 00:07:39.961 ' 00:07:39.961 06:37:53 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:39.961 06:37:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.961 06:37:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.961 06:37:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.961 06:37:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.961 06:37:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.961 06:37:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.961 06:37:53 -- paths/export.sh@5 -- # export PATH 00:07:39.961 06:37:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.961 06:37:53 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:39.961 06:37:53 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.961 06:37:53 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
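The dd_invalid_* cases that follow all use the same pattern: run spdk_dd with a deliberately bad option set and count the test as passed only if the tool exits non-zero without dying on a signal. A simplified stand-in for the NOT/valid_exec_arg helpers visible in the trace (common/autotest_common.sh; the real helpers also resolve functions and binaries via type -t/-P):
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  expect_failure() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return 1   # 129+ means killed by a signal, treated as a real failure
      (( es != 0 ))                # pass only if the arguments were rejected
  }
  expect_failure "$SPDK_DD" --ii= --ob=   # the unknown --ii= option exits with es=2 below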
00:07:39.961 06:37:53 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.961 06:37:53 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:39.961 06:37:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:39.961 06:37:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.961 06:37:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.220 ************************************ 00:07:40.220 START TEST dd_invalid_arguments 00:07:40.220 ************************************ 00:07:40.220 06:37:53 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:07:40.220 06:37:53 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:40.220 06:37:53 -- common/autotest_common.sh@650 -- # local es=0 00:07:40.220 06:37:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:40.220 06:37:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.220 06:37:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.220 06:37:53 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.220 06:37:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.220 06:37:53 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.220 06:37:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.220 06:37:53 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.220 06:37:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.220 06:37:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:40.220 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:40.220 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:40.220 options: 00:07:40.220 -c, --config JSON config file (default none) 00:07:40.220 --json JSON config file (default none) 00:07:40.220 --json-ignore-init-errors 00:07:40.220 don't exit on invalid config entry 00:07:40.220 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:40.220 -g, --single-file-segments 00:07:40.220 force creating just one hugetlbfs file 00:07:40.220 -h, --help show this usage 00:07:40.220 -i, --shm-id shared memory ID (optional) 00:07:40.220 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:40.220 --lcores lcore to CPU mapping list. The list is in the format: 00:07:40.220 [<,lcores[@CPUs]>...] 00:07:40.220 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:40.220 Within the group, '-' is used for range separator, 00:07:40.220 ',' is used for single number separator. 00:07:40.220 '( )' can be omitted for single element group, 00:07:40.220 '@' can be omitted if cpus and lcores have the same value 00:07:40.220 -n, --mem-channels channel number of memory channels used for DPDK 00:07:40.220 -p, --main-core main (primary) core for DPDK 00:07:40.220 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:40.220 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:40.220 --disable-cpumask-locks Disable CPU core lock files. 
00:07:40.220 --silence-noticelog disable notice level logging to stderr 00:07:40.220 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:40.220 -u, --no-pci disable PCI access 00:07:40.220 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:40.220 --max-delay maximum reactor delay (in microseconds) 00:07:40.220 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:40.220 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:40.220 -R, --huge-unlink unlink huge files after initialization 00:07:40.220 -v, --version print SPDK version 00:07:40.220 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:40.220 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:40.220 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:40.220 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:40.220 Tracepoints vary in size and can use more than one trace entry. 00:07:40.220 --rpcs-allowed comma-separated list of permitted RPCS 00:07:40.220 --env-context Opaque context for use of the env implementation 00:07:40.220 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:40.220 --no-huge run without using hugepages 00:07:40.220 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, scsi, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, vfu_virtio, vfu_virtio_blk, vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:40.220 -e, --tpoint-group [:] 00:07:40.220 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:07:40.220 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be[2024-12-14 06:37:54.029895] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:07:40.220 enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:40.220 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:40.220 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:40.220 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:40.220 [--------- DD Options ---------] 00:07:40.221 --if Input file. Must specify either --if or --ib. 00:07:40.221 --ib Input bdev. Must specifier either --if or --ib 00:07:40.221 --of Output file. Must specify either --of or --ob. 00:07:40.221 --ob Output bdev. Must specify either --of or --ob. 00:07:40.221 --iflag Input file flags. 00:07:40.221 --oflag Output file flags. 00:07:40.221 --bs I/O unit size (default: 4096) 00:07:40.221 --qd Queue depth (default: 2) 00:07:40.221 --count I/O unit count. The number of I/O units to copy. 
(default: all) 00:07:40.221 --skip Skip this many I/O units at start of input. (default: 0) 00:07:40.221 --seek Skip this many I/O units at start of output. (default: 0) 00:07:40.221 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:40.221 --sparse Enable hole skipping in input target 00:07:40.221 Available iflag and oflag values: 00:07:40.221 append - append mode 00:07:40.221 direct - use direct I/O for data 00:07:40.221 directory - fail unless a directory 00:07:40.221 dsync - use synchronized I/O for data 00:07:40.221 noatime - do not update access time 00:07:40.221 noctty - do not assign controlling terminal from file 00:07:40.221 nofollow - do not follow symlinks 00:07:40.221 nonblock - use non-blocking I/O 00:07:40.221 sync - use synchronized I/O for data and metadata 00:07:40.221 ************************************ 00:07:40.221 END TEST dd_invalid_arguments 00:07:40.221 ************************************ 00:07:40.221 06:37:54 -- common/autotest_common.sh@653 -- # es=2 00:07:40.221 06:37:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.221 06:37:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.221 06:37:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.221 00:07:40.221 real 0m0.099s 00:07:40.221 user 0m0.064s 00:07:40.221 sys 0m0.033s 00:07:40.221 06:37:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.221 06:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.221 06:37:54 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:40.221 06:37:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:40.221 06:37:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.221 06:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.221 ************************************ 00:07:40.221 START TEST dd_double_input 00:07:40.221 ************************************ 00:07:40.221 06:37:54 -- common/autotest_common.sh@1114 -- # double_input 00:07:40.221 06:37:54 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:40.221 06:37:54 -- common/autotest_common.sh@650 -- # local es=0 00:07:40.221 06:37:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:40.221 06:37:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.221 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.221 06:37:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.221 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.221 06:37:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.221 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.221 06:37:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.221 06:37:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.221 06:37:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:40.221 [2024-12-14 06:37:54.164307] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
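The usage text above also documents the mutually exclusive source/sink options these negative cases probe: exactly one of --if/--ib and one of --of/--ob must be given. The accepted shapes therefore look roughly as follows (file names and the Malloc0 bdev are placeholders; bdev targets need a --json config such as the bdev_aio/lvstore documents dumped earlier in this log):
  spdk_dd --if=in.bin  --of=out.bin  --bs=4096                      # file -> file
  spdk_dd --if=in.bin  --ob=Malloc0  --bs=4096 --json bdev.json     # file -> bdev
  spdk_dd --ib=Malloc0 --of=out.bin  --bs=4096 --json bdev.json     # bdev -> file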
00:07:40.221 06:37:54 -- common/autotest_common.sh@653 -- # es=22 00:07:40.221 06:37:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.221 06:37:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.221 ************************************ 00:07:40.221 END TEST dd_double_input 00:07:40.221 ************************************ 00:07:40.221 06:37:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.221 00:07:40.221 real 0m0.074s 00:07:40.221 user 0m0.051s 00:07:40.221 sys 0m0.022s 00:07:40.221 06:37:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.221 06:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.480 06:37:54 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:40.480 06:37:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:40.480 06:37:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.480 06:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.480 ************************************ 00:07:40.480 START TEST dd_double_output 00:07:40.480 ************************************ 00:07:40.480 06:37:54 -- common/autotest_common.sh@1114 -- # double_output 00:07:40.480 06:37:54 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:40.480 06:37:54 -- common/autotest_common.sh@650 -- # local es=0 00:07:40.480 06:37:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:40.480 06:37:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.480 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.480 06:37:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.480 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.480 06:37:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.480 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.480 06:37:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.480 06:37:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.480 06:37:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:40.480 [2024-12-14 06:37:54.289398] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:07:40.480 06:37:54 -- common/autotest_common.sh@653 -- # es=22 00:07:40.480 06:37:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.480 06:37:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.480 06:37:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.480 00:07:40.480 real 0m0.070s 00:07:40.480 user 0m0.042s 00:07:40.480 sys 0m0.027s 00:07:40.480 ************************************ 00:07:40.480 END TEST dd_double_output 00:07:40.480 ************************************ 00:07:40.480 06:37:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.480 06:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.480 06:37:54 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:40.480 06:37:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:40.480 06:37:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.480 06:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.480 ************************************ 00:07:40.480 START TEST dd_no_input 00:07:40.480 ************************************ 00:07:40.480 06:37:54 -- common/autotest_common.sh@1114 -- # no_input 00:07:40.480 06:37:54 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:40.480 06:37:54 -- common/autotest_common.sh@650 -- # local es=0 00:07:40.480 06:37:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:40.480 06:37:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.480 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.480 06:37:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.480 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.480 06:37:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.480 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.480 06:37:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.480 06:37:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.480 06:37:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:40.480 [2024-12-14 06:37:54.442155] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:07:40.480 06:37:54 -- common/autotest_common.sh@653 -- # es=22 00:07:40.480 06:37:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.480 06:37:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.480 06:37:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.480 00:07:40.480 real 0m0.104s 00:07:40.480 user 0m0.071s 00:07:40.480 sys 0m0.030s 00:07:40.480 06:37:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.480 06:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.480 ************************************ 00:07:40.480 END TEST dd_no_input 00:07:40.480 ************************************ 00:07:40.739 06:37:54 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:40.739 06:37:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:40.739 06:37:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.739 06:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.739 ************************************ 
00:07:40.739 START TEST dd_no_output 00:07:40.739 ************************************ 00:07:40.739 06:37:54 -- common/autotest_common.sh@1114 -- # no_output 00:07:40.739 06:37:54 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:40.739 06:37:54 -- common/autotest_common.sh@650 -- # local es=0 00:07:40.739 06:37:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:40.739 06:37:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.739 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.739 06:37:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.739 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.739 06:37:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.739 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.739 06:37:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.739 06:37:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.739 06:37:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:40.739 [2024-12-14 06:37:54.573961] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:07:40.739 06:37:54 -- common/autotest_common.sh@653 -- # es=22 00:07:40.739 06:37:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.739 06:37:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.739 06:37:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.739 00:07:40.739 real 0m0.076s 00:07:40.739 user 0m0.049s 00:07:40.739 sys 0m0.025s 00:07:40.739 06:37:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.739 ************************************ 00:07:40.739 END TEST dd_no_output 00:07:40.739 ************************************ 00:07:40.739 06:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.739 06:37:54 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:40.739 06:37:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:40.739 06:37:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.739 06:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.739 ************************************ 00:07:40.739 START TEST dd_wrong_blocksize 00:07:40.739 ************************************ 00:07:40.739 06:37:54 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:07:40.739 06:37:54 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:40.739 06:37:54 -- common/autotest_common.sh@650 -- # local es=0 00:07:40.739 06:37:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:40.739 06:37:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.739 06:37:54 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:07:40.739 06:37:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.739 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.739 06:37:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.739 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.739 06:37:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.739 06:37:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.740 06:37:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:40.740 [2024-12-14 06:37:54.705638] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:07:40.740 06:37:54 -- common/autotest_common.sh@653 -- # es=22 00:07:40.740 06:37:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.740 06:37:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.740 06:37:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.740 00:07:40.740 real 0m0.076s 00:07:40.740 user 0m0.047s 00:07:40.740 sys 0m0.028s 00:07:40.740 06:37:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.740 06:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.740 ************************************ 00:07:40.740 END TEST dd_wrong_blocksize 00:07:40.740 ************************************ 00:07:40.999 06:37:54 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:40.999 06:37:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:40.999 06:37:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.999 06:37:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.999 ************************************ 00:07:40.999 START TEST dd_smaller_blocksize 00:07:40.999 ************************************ 00:07:40.999 06:37:54 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:07:40.999 06:37:54 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:40.999 06:37:54 -- common/autotest_common.sh@650 -- # local es=0 00:07:40.999 06:37:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:40.999 06:37:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.999 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.999 06:37:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.999 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.999 06:37:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.999 06:37:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.999 06:37:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.999 06:37:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:07:40.999 06:37:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:40.999 [2024-12-14 06:37:54.836723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.999 [2024-12-14 06:37:54.836819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59633 ] 00:07:40.999 [2024-12-14 06:37:54.976818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.258 [2024-12-14 06:37:55.045528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.516 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:41.516 [2024-12-14 06:37:55.373917] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:41.516 [2024-12-14 06:37:55.373997] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.516 [2024-12-14 06:37:55.445258] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:41.775 06:37:55 -- common/autotest_common.sh@653 -- # es=244 00:07:41.775 06:37:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.775 06:37:55 -- common/autotest_common.sh@662 -- # es=116 00:07:41.775 ************************************ 00:07:41.775 END TEST dd_smaller_blocksize 00:07:41.775 ************************************ 00:07:41.775 06:37:55 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:41.775 06:37:55 -- common/autotest_common.sh@670 -- # es=1 00:07:41.775 06:37:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.775 00:07:41.775 real 0m0.777s 00:07:41.775 user 0m0.362s 00:07:41.775 sys 0m0.309s 00:07:41.775 06:37:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.775 06:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:41.775 06:37:55 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:41.775 06:37:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:41.775 06:37:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.775 06:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:41.775 ************************************ 00:07:41.775 START TEST dd_invalid_count 00:07:41.775 ************************************ 00:07:41.775 06:37:55 -- common/autotest_common.sh@1114 -- # invalid_count 00:07:41.775 06:37:55 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:41.775 06:37:55 -- common/autotest_common.sh@650 -- # local es=0 00:07:41.775 06:37:55 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:41.775 06:37:55 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.775 06:37:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.775 06:37:55 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.775 06:37:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.775 06:37:55 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.775 06:37:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.775 06:37:55 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.775 06:37:55 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.775 06:37:55 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:41.775 [2024-12-14 06:37:55.658935] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:07:41.775 ************************************ 00:07:41.775 END TEST dd_invalid_count 00:07:41.775 ************************************ 00:07:41.775 06:37:55 -- common/autotest_common.sh@653 -- # es=22 00:07:41.775 06:37:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.775 06:37:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:41.775 06:37:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.775 00:07:41.775 real 0m0.072s 00:07:41.775 user 0m0.051s 00:07:41.775 sys 0m0.020s 00:07:41.775 06:37:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.775 06:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:41.775 06:37:55 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:41.775 06:37:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:41.775 06:37:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.775 06:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:41.775 ************************************ 00:07:41.775 START TEST dd_invalid_oflag 00:07:41.775 ************************************ 00:07:41.775 06:37:55 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:07:41.775 06:37:55 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:41.775 06:37:55 -- common/autotest_common.sh@650 -- # local es=0 00:07:41.775 06:37:55 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:41.775 06:37:55 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.775 06:37:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.775 06:37:55 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.775 06:37:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.775 06:37:55 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.775 06:37:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.775 06:37:55 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.775 06:37:55 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.775 06:37:55 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:42.034 [2024-12-14 06:37:55.787073] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:07:42.034 06:37:55 -- common/autotest_common.sh@653 -- # es=22 00:07:42.034 06:37:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.034 06:37:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.034 
************************************ 00:07:42.034 END TEST dd_invalid_oflag 00:07:42.034 ************************************ 00:07:42.034 06:37:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.034 00:07:42.034 real 0m0.073s 00:07:42.034 user 0m0.046s 00:07:42.034 sys 0m0.027s 00:07:42.034 06:37:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.034 06:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:42.034 06:37:55 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:42.034 06:37:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:42.034 06:37:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.034 06:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:42.034 ************************************ 00:07:42.034 START TEST dd_invalid_iflag 00:07:42.034 ************************************ 00:07:42.034 06:37:55 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:07:42.034 06:37:55 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:42.034 06:37:55 -- common/autotest_common.sh@650 -- # local es=0 00:07:42.034 06:37:55 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:42.034 06:37:55 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.034 06:37:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.034 06:37:55 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.034 06:37:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.034 06:37:55 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.034 06:37:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.034 06:37:55 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.034 06:37:55 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.034 06:37:55 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:42.034 [2024-12-14 06:37:55.912652] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:07:42.034 06:37:55 -- common/autotest_common.sh@653 -- # es=22 00:07:42.034 06:37:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.034 06:37:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.034 06:37:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.034 00:07:42.034 real 0m0.073s 00:07:42.034 user 0m0.049s 00:07:42.034 sys 0m0.024s 00:07:42.034 06:37:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.034 06:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:42.034 ************************************ 00:07:42.034 END TEST dd_invalid_iflag 00:07:42.034 ************************************ 00:07:42.034 06:37:55 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:42.034 06:37:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:42.034 06:37:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.034 06:37:55 -- common/autotest_common.sh@10 -- # set +x 00:07:42.034 ************************************ 00:07:42.034 START TEST dd_unknown_flag 00:07:42.034 ************************************ 00:07:42.034 06:37:55 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:07:42.034 06:37:55 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:42.034 06:37:55 -- common/autotest_common.sh@650 -- # local es=0 00:07:42.034 06:37:55 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:42.034 06:37:55 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.034 06:37:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.034 06:37:55 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.034 06:37:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.034 06:37:55 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.034 06:37:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.034 06:37:55 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.034 06:37:55 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.034 06:37:55 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:42.330 [2024-12-14 06:37:56.040725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:42.330 [2024-12-14 06:37:56.041238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59725 ] 00:07:42.330 [2024-12-14 06:37:56.179698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.330 [2024-12-14 06:37:56.234292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.330 [2024-12-14 06:37:56.277933] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:07:42.330 [2024-12-14 06:37:56.278020] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:07:42.330 [2024-12-14 06:37:56.278031] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:07:42.330 [2024-12-14 06:37:56.278041] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.589 [2024-12-14 06:37:56.340835] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:42.589 06:37:56 -- common/autotest_common.sh@653 -- # es=236 00:07:42.589 06:37:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.589 06:37:56 -- common/autotest_common.sh@662 -- # es=108 00:07:42.589 06:37:56 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:42.589 06:37:56 -- common/autotest_common.sh@670 -- # es=1 00:07:42.589 06:37:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.589 00:07:42.589 real 0m0.459s 00:07:42.589 user 0m0.262s 00:07:42.589 sys 0m0.092s 00:07:42.589 06:37:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.589 ************************************ 00:07:42.589 END TEST dd_unknown_flag 00:07:42.589 
************************************ 00:07:42.589 06:37:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.589 06:37:56 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:42.589 06:37:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:42.589 06:37:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.589 06:37:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.589 ************************************ 00:07:42.589 START TEST dd_invalid_json 00:07:42.589 ************************************ 00:07:42.589 06:37:56 -- common/autotest_common.sh@1114 -- # invalid_json 00:07:42.589 06:37:56 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:42.589 06:37:56 -- dd/negative_dd.sh@95 -- # : 00:07:42.589 06:37:56 -- common/autotest_common.sh@650 -- # local es=0 00:07:42.589 06:37:56 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:42.589 06:37:56 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.589 06:37:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.589 06:37:56 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.589 06:37:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.589 06:37:56 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.589 06:37:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.589 06:37:56 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.590 06:37:56 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.590 06:37:56 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:42.590 [2024-12-14 06:37:56.553047] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
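Throughout this suite the bdev configuration reaches spdk_dd as JSON on an anonymous descriptor (--json /dev/fd/62 in the traces, produced by gen_conf in dd/common.sh). In dd_invalid_json, which starts above, that descriptor is fed from a bare ':' (visible in the trace), so the parser sees an empty document and fails with -2. A minimal stand-alone sketch of both shapes, assuming bash process substitution and the dump files used in this run:
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  # Well-formed shape: gen_conf normally emits a {"subsystems":[...]} document like those dumped earlier.
  "$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --json <(echo '{"subsystems": []}')
  # Failure exercised here: an empty config stream, rejected with "Parsing JSON configuration failed (-2)".
  "$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --json <(:)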
00:07:42.590 [2024-12-14 06:37:56.553144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59758 ] 00:07:42.848 [2024-12-14 06:37:56.690226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.848 [2024-12-14 06:37:56.739583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.848 [2024-12-14 06:37:56.739720] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:07:42.848 [2024-12-14 06:37:56.739739] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.848 [2024-12-14 06:37:56.739773] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:42.848 06:37:56 -- common/autotest_common.sh@653 -- # es=234 00:07:42.848 06:37:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.848 06:37:56 -- common/autotest_common.sh@662 -- # es=106 00:07:42.848 06:37:56 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:42.848 06:37:56 -- common/autotest_common.sh@670 -- # es=1 00:07:42.848 06:37:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.848 00:07:42.848 real 0m0.341s 00:07:42.848 user 0m0.183s 00:07:42.848 sys 0m0.056s 00:07:42.848 06:37:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.848 06:37:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.848 ************************************ 00:07:42.848 END TEST dd_invalid_json 00:07:42.848 ************************************ 00:07:43.108 ************************************ 00:07:43.108 END TEST spdk_dd_negative 00:07:43.108 ************************************ 00:07:43.108 00:07:43.108 real 0m3.128s 00:07:43.108 user 0m1.587s 00:07:43.108 sys 0m1.149s 00:07:43.108 06:37:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.108 06:37:56 -- common/autotest_common.sh@10 -- # set +x 00:07:43.108 ************************************ 00:07:43.108 END TEST spdk_dd 00:07:43.108 ************************************ 00:07:43.108 00:07:43.108 real 1m7.400s 00:07:43.108 user 0m41.940s 00:07:43.108 sys 0m16.301s 00:07:43.108 06:37:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.108 06:37:56 -- common/autotest_common.sh@10 -- # set +x 00:07:43.108 06:37:56 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:43.108 06:37:56 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:43.108 06:37:56 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:43.108 06:37:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.108 06:37:56 -- common/autotest_common.sh@10 -- # set +x 00:07:43.108 06:37:57 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:43.108 06:37:57 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:43.108 06:37:57 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:43.108 06:37:57 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:43.108 06:37:57 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:43.108 06:37:57 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:43.108 06:37:57 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:43.108 06:37:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:43.108 06:37:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.108 06:37:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.108 ************************************ 00:07:43.108 START TEST 
nvmf_tcp 00:07:43.108 ************************************ 00:07:43.108 06:37:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:43.108 * Looking for test storage... 00:07:43.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:43.108 06:37:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:43.108 06:37:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:43.108 06:37:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:43.367 06:37:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:43.367 06:37:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:43.367 06:37:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:43.367 06:37:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:43.367 06:37:57 -- scripts/common.sh@335 -- # IFS=.-: 00:07:43.367 06:37:57 -- scripts/common.sh@335 -- # read -ra ver1 00:07:43.367 06:37:57 -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.367 06:37:57 -- scripts/common.sh@336 -- # read -ra ver2 00:07:43.367 06:37:57 -- scripts/common.sh@337 -- # local 'op=<' 00:07:43.367 06:37:57 -- scripts/common.sh@339 -- # ver1_l=2 00:07:43.367 06:37:57 -- scripts/common.sh@340 -- # ver2_l=1 00:07:43.367 06:37:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:43.367 06:37:57 -- scripts/common.sh@343 -- # case "$op" in 00:07:43.367 06:37:57 -- scripts/common.sh@344 -- # : 1 00:07:43.367 06:37:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:43.367 06:37:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.367 06:37:57 -- scripts/common.sh@364 -- # decimal 1 00:07:43.367 06:37:57 -- scripts/common.sh@352 -- # local d=1 00:07:43.367 06:37:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.367 06:37:57 -- scripts/common.sh@354 -- # echo 1 00:07:43.367 06:37:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:43.367 06:37:57 -- scripts/common.sh@365 -- # decimal 2 00:07:43.367 06:37:57 -- scripts/common.sh@352 -- # local d=2 00:07:43.367 06:37:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.367 06:37:57 -- scripts/common.sh@354 -- # echo 2 00:07:43.367 06:37:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:43.367 06:37:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:43.367 06:37:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:43.367 06:37:57 -- scripts/common.sh@367 -- # return 0 00:07:43.367 06:37:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.367 06:37:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:43.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.367 --rc genhtml_branch_coverage=1 00:07:43.367 --rc genhtml_function_coverage=1 00:07:43.367 --rc genhtml_legend=1 00:07:43.367 --rc geninfo_all_blocks=1 00:07:43.367 --rc geninfo_unexecuted_blocks=1 00:07:43.367 00:07:43.367 ' 00:07:43.367 06:37:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:43.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.367 --rc genhtml_branch_coverage=1 00:07:43.367 --rc genhtml_function_coverage=1 00:07:43.367 --rc genhtml_legend=1 00:07:43.367 --rc geninfo_all_blocks=1 00:07:43.367 --rc geninfo_unexecuted_blocks=1 00:07:43.367 00:07:43.367 ' 00:07:43.367 06:37:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:43.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.367 --rc 
genhtml_branch_coverage=1 00:07:43.367 --rc genhtml_function_coverage=1 00:07:43.367 --rc genhtml_legend=1 00:07:43.367 --rc geninfo_all_blocks=1 00:07:43.367 --rc geninfo_unexecuted_blocks=1 00:07:43.367 00:07:43.367 ' 00:07:43.367 06:37:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:43.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.367 --rc genhtml_branch_coverage=1 00:07:43.367 --rc genhtml_function_coverage=1 00:07:43.367 --rc genhtml_legend=1 00:07:43.368 --rc geninfo_all_blocks=1 00:07:43.368 --rc geninfo_unexecuted_blocks=1 00:07:43.368 00:07:43.368 ' 00:07:43.368 06:37:57 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:43.368 06:37:57 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:43.368 06:37:57 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:43.368 06:37:57 -- nvmf/common.sh@7 -- # uname -s 00:07:43.368 06:37:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.368 06:37:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.368 06:37:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.368 06:37:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.368 06:37:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.368 06:37:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.368 06:37:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.368 06:37:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.368 06:37:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.368 06:37:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.368 06:37:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:07:43.368 06:37:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:07:43.368 06:37:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.368 06:37:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.368 06:37:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:43.368 06:37:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.368 06:37:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.368 06:37:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.368 06:37:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.368 06:37:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.368 06:37:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.368 06:37:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.368 06:37:57 -- paths/export.sh@5 -- # export PATH 00:07:43.368 06:37:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.368 06:37:57 -- nvmf/common.sh@46 -- # : 0 00:07:43.368 06:37:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:43.368 06:37:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:43.368 06:37:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:43.368 06:37:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.368 06:37:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.368 06:37:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:43.368 06:37:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:43.368 06:37:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:43.368 06:37:57 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:43.368 06:37:57 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:43.368 06:37:57 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:43.368 06:37:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.368 06:37:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.368 06:37:57 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:43.368 06:37:57 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:43.368 06:37:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:43.368 06:37:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.368 06:37:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.368 ************************************ 00:07:43.368 START TEST nvmf_host_management 00:07:43.368 ************************************ 00:07:43.368 06:37:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:43.368 * Looking for test storage... 
00:07:43.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:43.368 06:37:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:43.368 06:37:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:43.368 06:37:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:43.628 06:37:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:43.628 06:37:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:43.628 06:37:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:43.628 06:37:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:43.628 06:37:57 -- scripts/common.sh@335 -- # IFS=.-: 00:07:43.628 06:37:57 -- scripts/common.sh@335 -- # read -ra ver1 00:07:43.628 06:37:57 -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.628 06:37:57 -- scripts/common.sh@336 -- # read -ra ver2 00:07:43.628 06:37:57 -- scripts/common.sh@337 -- # local 'op=<' 00:07:43.628 06:37:57 -- scripts/common.sh@339 -- # ver1_l=2 00:07:43.628 06:37:57 -- scripts/common.sh@340 -- # ver2_l=1 00:07:43.628 06:37:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:43.628 06:37:57 -- scripts/common.sh@343 -- # case "$op" in 00:07:43.628 06:37:57 -- scripts/common.sh@344 -- # : 1 00:07:43.628 06:37:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:43.628 06:37:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.628 06:37:57 -- scripts/common.sh@364 -- # decimal 1 00:07:43.628 06:37:57 -- scripts/common.sh@352 -- # local d=1 00:07:43.628 06:37:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.628 06:37:57 -- scripts/common.sh@354 -- # echo 1 00:07:43.628 06:37:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:43.628 06:37:57 -- scripts/common.sh@365 -- # decimal 2 00:07:43.628 06:37:57 -- scripts/common.sh@352 -- # local d=2 00:07:43.628 06:37:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.628 06:37:57 -- scripts/common.sh@354 -- # echo 2 00:07:43.628 06:37:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:43.628 06:37:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:43.628 06:37:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:43.628 06:37:57 -- scripts/common.sh@367 -- # return 0 00:07:43.628 06:37:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.628 06:37:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:43.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.628 --rc genhtml_branch_coverage=1 00:07:43.628 --rc genhtml_function_coverage=1 00:07:43.628 --rc genhtml_legend=1 00:07:43.628 --rc geninfo_all_blocks=1 00:07:43.628 --rc geninfo_unexecuted_blocks=1 00:07:43.628 00:07:43.628 ' 00:07:43.628 06:37:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:43.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.628 --rc genhtml_branch_coverage=1 00:07:43.628 --rc genhtml_function_coverage=1 00:07:43.628 --rc genhtml_legend=1 00:07:43.628 --rc geninfo_all_blocks=1 00:07:43.628 --rc geninfo_unexecuted_blocks=1 00:07:43.628 00:07:43.628 ' 00:07:43.628 06:37:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:43.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.628 --rc genhtml_branch_coverage=1 00:07:43.628 --rc genhtml_function_coverage=1 00:07:43.628 --rc genhtml_legend=1 00:07:43.628 --rc geninfo_all_blocks=1 00:07:43.628 --rc geninfo_unexecuted_blocks=1 00:07:43.628 00:07:43.628 ' 00:07:43.628 
06:37:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:43.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.628 --rc genhtml_branch_coverage=1 00:07:43.628 --rc genhtml_function_coverage=1 00:07:43.628 --rc genhtml_legend=1 00:07:43.628 --rc geninfo_all_blocks=1 00:07:43.628 --rc geninfo_unexecuted_blocks=1 00:07:43.628 00:07:43.628 ' 00:07:43.628 06:37:57 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:43.628 06:37:57 -- nvmf/common.sh@7 -- # uname -s 00:07:43.628 06:37:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.628 06:37:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.628 06:37:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.628 06:37:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.628 06:37:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.628 06:37:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.628 06:37:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.628 06:37:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.628 06:37:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.628 06:37:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.628 06:37:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:07:43.628 06:37:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:07:43.628 06:37:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.628 06:37:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.628 06:37:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:43.628 06:37:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.628 06:37:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.628 06:37:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.628 06:37:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.628 06:37:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.628 06:37:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.628 06:37:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.628 06:37:57 -- paths/export.sh@5 -- # export PATH 00:07:43.628 06:37:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.628 06:37:57 -- nvmf/common.sh@46 -- # : 0 00:07:43.628 06:37:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:43.628 06:37:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:43.628 06:37:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:43.628 06:37:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.628 06:37:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.628 06:37:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:43.628 06:37:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:43.628 06:37:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:43.628 06:37:57 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:43.628 06:37:57 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:43.628 06:37:57 -- target/host_management.sh@104 -- # nvmftestinit 00:07:43.628 06:37:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:43.628 06:37:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.628 06:37:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:43.628 06:37:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:43.628 06:37:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:43.628 06:37:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.628 06:37:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.628 06:37:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.628 06:37:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:43.628 06:37:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:43.628 06:37:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:43.628 06:37:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:43.628 06:37:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:43.628 06:37:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:43.628 06:37:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.628 06:37:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.628 06:37:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:43.628 06:37:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:43.628 06:37:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:43.628 06:37:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:43.628 06:37:57 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:43.628 06:37:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.628 06:37:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:43.628 06:37:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:43.628 06:37:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:43.628 06:37:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:43.628 06:37:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:43.628 Cannot find device "nvmf_init_br" 00:07:43.628 06:37:57 -- nvmf/common.sh@153 -- # true 00:07:43.628 06:37:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:43.628 Cannot find device "nvmf_tgt_br" 00:07:43.628 06:37:57 -- nvmf/common.sh@154 -- # true 00:07:43.628 06:37:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:43.629 Cannot find device "nvmf_tgt_br2" 00:07:43.629 06:37:57 -- nvmf/common.sh@155 -- # true 00:07:43.629 06:37:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:43.629 Cannot find device "nvmf_init_br" 00:07:43.629 06:37:57 -- nvmf/common.sh@156 -- # true 00:07:43.629 06:37:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:43.629 Cannot find device "nvmf_tgt_br" 00:07:43.629 06:37:57 -- nvmf/common.sh@157 -- # true 00:07:43.629 06:37:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:43.629 Cannot find device "nvmf_tgt_br2" 00:07:43.629 06:37:57 -- nvmf/common.sh@158 -- # true 00:07:43.629 06:37:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:43.629 Cannot find device "nvmf_br" 00:07:43.629 06:37:57 -- nvmf/common.sh@159 -- # true 00:07:43.629 06:37:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:43.629 Cannot find device "nvmf_init_if" 00:07:43.629 06:37:57 -- nvmf/common.sh@160 -- # true 00:07:43.629 06:37:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:43.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:43.629 06:37:57 -- nvmf/common.sh@161 -- # true 00:07:43.629 06:37:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:43.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:43.629 06:37:57 -- nvmf/common.sh@162 -- # true 00:07:43.629 06:37:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:43.629 06:37:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:43.629 06:37:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:43.629 06:37:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:43.629 06:37:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:43.629 06:37:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:43.888 06:37:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:43.888 06:37:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:43.888 06:37:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:43.888 06:37:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:43.888 06:37:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:43.888 06:37:57 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:43.888 06:37:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:43.888 06:37:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:43.888 06:37:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:43.888 06:37:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:43.888 06:37:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:43.888 06:37:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:43.888 06:37:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:43.888 06:37:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:43.888 06:37:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:43.888 06:37:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:43.888 06:37:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:43.888 06:37:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:43.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:07:43.888 00:07:43.888 --- 10.0.0.2 ping statistics --- 00:07:43.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.888 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:07:43.888 06:37:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:43.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:43.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:07:43.888 00:07:43.888 --- 10.0.0.3 ping statistics --- 00:07:43.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.888 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:07:43.888 06:37:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:43.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:43.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:07:43.888 00:07:43.888 --- 10.0.0.1 ping statistics --- 00:07:43.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.888 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:07:43.888 06:37:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.888 06:37:57 -- nvmf/common.sh@421 -- # return 0 00:07:43.888 06:37:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:43.888 06:37:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.888 06:37:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:43.888 06:37:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:43.888 06:37:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.888 06:37:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:43.888 06:37:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:43.888 06:37:57 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:07:43.888 06:37:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:43.888 06:37:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.888 06:37:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.888 ************************************ 00:07:43.888 START TEST nvmf_host_management 00:07:43.888 ************************************ 00:07:43.888 06:37:57 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:07:43.888 06:37:57 -- target/host_management.sh@69 -- # starttarget 00:07:43.888 06:37:57 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:43.888 06:37:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:43.888 06:37:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.888 06:37:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.888 06:37:57 -- nvmf/common.sh@469 -- # nvmfpid=60023 00:07:43.888 06:37:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:43.888 06:37:57 -- nvmf/common.sh@470 -- # waitforlisten 60023 00:07:43.888 06:37:57 -- common/autotest_common.sh@829 -- # '[' -z 60023 ']' 00:07:43.888 06:37:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.888 06:37:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.888 06:37:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.888 06:37:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.888 06:37:57 -- common/autotest_common.sh@10 -- # set +x 00:07:44.147 [2024-12-14 06:37:57.904611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:44.147 [2024-12-14 06:37:57.904708] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.147 [2024-12-14 06:37:58.042481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.147 [2024-12-14 06:37:58.112875] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:44.147 [2024-12-14 06:37:58.113312] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
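The veth/namespace plumbing traced above is what lets this job exercise NVMe/TCP on a VM with no spare NICs: the target runs inside the nvmf_tgt_ns_spdk namespace and owns 10.0.0.2 (and 10.0.0.3), the initiator stays in the root namespace on 10.0.0.1, and the host-side peer interfaces are bridged together. A condensed, stand-alone sketch of the same topology follows (interface names, addresses, and iptables rules are taken from the trace; the second target interface, nvmf_tgt_if2 on 10.0.0.3, repeats the nvmf_tgt_if pattern and is omitted; run as root, teardown not shown):

  # Namespace for the NVMe-oF target, plus veth pairs for initiator and target.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Initiator address in the root namespace, target address inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # Bring everything up, including loopback inside the namespace.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peer interfaces so initiator and target traffic can meet.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Admit NVMe/TCP traffic on port 4420 and allow forwarding across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  ping -c 1 10.0.0.2   # the target-side address should now answer, as it does above

The sub-millisecond ping round trips confirm the bridge path before nvmf_tgt is started under "ip netns exec nvmf_tgt_ns_spdk", which is what the remaining startup notices below belong to.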
00:07:44.147 [2024-12-14 06:37:58.113457] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.147 [2024-12-14 06:37:58.113636] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.147 [2024-12-14 06:37:58.114095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.147 [2024-12-14 06:37:58.114169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.147 [2024-12-14 06:37:58.114370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.147 [2024-12-14 06:37:58.114275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:45.083 06:37:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.083 06:37:58 -- common/autotest_common.sh@862 -- # return 0 00:07:45.083 06:37:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:45.083 06:37:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:45.083 06:37:58 -- common/autotest_common.sh@10 -- # set +x 00:07:45.083 06:37:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.083 06:37:58 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:45.083 06:37:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.083 06:37:58 -- common/autotest_common.sh@10 -- # set +x 00:07:45.083 [2024-12-14 06:37:58.955599] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.083 06:37:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.083 06:37:58 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:45.083 06:37:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.083 06:37:58 -- common/autotest_common.sh@10 -- # set +x 00:07:45.083 06:37:58 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:45.083 06:37:58 -- target/host_management.sh@23 -- # cat 00:07:45.083 06:37:58 -- target/host_management.sh@30 -- # rpc_cmd 00:07:45.083 06:37:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.083 06:37:58 -- common/autotest_common.sh@10 -- # set +x 00:07:45.083 Malloc0 00:07:45.083 [2024-12-14 06:37:59.024753] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.083 06:37:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.083 06:37:59 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:45.083 06:37:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:45.083 06:37:59 -- common/autotest_common.sh@10 -- # set +x 00:07:45.342 06:37:59 -- target/host_management.sh@73 -- # perfpid=60083 00:07:45.342 06:37:59 -- target/host_management.sh@74 -- # waitforlisten 60083 /var/tmp/bdevperf.sock 00:07:45.342 06:37:59 -- common/autotest_common.sh@829 -- # '[' -z 60083 ']' 00:07:45.342 06:37:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:45.342 06:37:59 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:45.342 06:37:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.342 06:37:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:07:45.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:45.342 06:37:59 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:45.342 06:37:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.342 06:37:59 -- common/autotest_common.sh@10 -- # set +x 00:07:45.342 06:37:59 -- nvmf/common.sh@520 -- # config=() 00:07:45.342 06:37:59 -- nvmf/common.sh@520 -- # local subsystem config 00:07:45.342 06:37:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:45.342 06:37:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:45.342 { 00:07:45.342 "params": { 00:07:45.342 "name": "Nvme$subsystem", 00:07:45.342 "trtype": "$TEST_TRANSPORT", 00:07:45.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:45.342 "adrfam": "ipv4", 00:07:45.342 "trsvcid": "$NVMF_PORT", 00:07:45.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:45.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:45.342 "hdgst": ${hdgst:-false}, 00:07:45.342 "ddgst": ${ddgst:-false} 00:07:45.342 }, 00:07:45.342 "method": "bdev_nvme_attach_controller" 00:07:45.342 } 00:07:45.342 EOF 00:07:45.342 )") 00:07:45.342 06:37:59 -- nvmf/common.sh@542 -- # cat 00:07:45.342 06:37:59 -- nvmf/common.sh@544 -- # jq . 00:07:45.342 06:37:59 -- nvmf/common.sh@545 -- # IFS=, 00:07:45.342 06:37:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:45.342 "params": { 00:07:45.342 "name": "Nvme0", 00:07:45.342 "trtype": "tcp", 00:07:45.342 "traddr": "10.0.0.2", 00:07:45.342 "adrfam": "ipv4", 00:07:45.342 "trsvcid": "4420", 00:07:45.342 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:45.342 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:45.342 "hdgst": false, 00:07:45.342 "ddgst": false 00:07:45.342 }, 00:07:45.342 "method": "bdev_nvme_attach_controller" 00:07:45.342 }' 00:07:45.342 [2024-12-14 06:37:59.121549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:45.342 [2024-12-14 06:37:59.121646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60083 ] 00:07:45.342 [2024-12-14 06:37:59.254817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.342 [2024-12-14 06:37:59.311761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.601 Running I/O for 10 seconds... 
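With the 10-second verify job now running, the picture is: nvmf_tgt (pid 60023) inside the namespace exposes a 64 MiB, 512-byte-block Malloc0 namespace through subsystem nqn.2016-06.io.spdk:cnode0 on a TCP listener at 10.0.0.2:4420, and bdevperf (pid 60083) in the root namespace has attached to it as Nvme0n1 using the generated bdev_nvme_attach_controller parameters printed above. The rpcs.txt that host_management.sh cats into the target is not echoed in the log; a representative JSON-RPC sequence that reaches the same state is sketched below (the nvmf_create_transport arguments match the trace exactly, the remaining arguments are assumptions based on the sizes and names reported, and invoking them through scripts/rpc.py rather than the harness's rpc_cmd wrapper is also an assumption):

  # Hypothetical equivalent of the rpcs.txt fed to the target above.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                 # exactly as traced above
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                    # MALLOC_BDEV_SIZE=64 MiB, MALLOC_BLOCK_SIZE=512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Once I/O is flowing, the harness polls bdevperf's own RPC socket (bdev_get_iostat piped through jq, traced next) until at least 100 reads have completed before it injects the host-removal fault.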
00:07:46.539 06:38:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.539 06:38:00 -- common/autotest_common.sh@862 -- # return 0 00:07:46.539 06:38:00 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:46.539 06:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.539 06:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.539 06:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.539 06:38:00 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:46.539 06:38:00 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:46.539 06:38:00 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:46.539 06:38:00 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:46.539 06:38:00 -- target/host_management.sh@52 -- # local ret=1 00:07:46.539 06:38:00 -- target/host_management.sh@53 -- # local i 00:07:46.539 06:38:00 -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:46.539 06:38:00 -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:46.539 06:38:00 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:46.539 06:38:00 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:46.539 06:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.539 06:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.539 06:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.539 06:38:00 -- target/host_management.sh@55 -- # read_io_count=2128 00:07:46.539 06:38:00 -- target/host_management.sh@58 -- # '[' 2128 -ge 100 ']' 00:07:46.539 06:38:00 -- target/host_management.sh@59 -- # ret=0 00:07:46.539 06:38:00 -- target/host_management.sh@60 -- # break 00:07:46.539 06:38:00 -- target/host_management.sh@64 -- # return 0 00:07:46.539 06:38:00 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:46.539 06:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.539 06:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.539 [2024-12-14 06:38:00.247346] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d00 is same with the state(5) to be set 00:07:46.539 [2024-12-14 06:38:00.247407] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d00 is same with the state(5) to be set 00:07:46.539 [2024-12-14 06:38:00.247417] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d00 is same with the state(5) to be set 00:07:46.539 [2024-12-14 06:38:00.247425] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d00 is same with the state(5) to be set 00:07:46.539 [2024-12-14 06:38:00.247433] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d00 is same with the state(5) to be set 00:07:46.540 [2024-12-14 06:38:00.247442] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d00 is same with the state(5) to be set 00:07:46.540 [2024-12-14 06:38:00.247450] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d00 is same with the state(5) to be set 00:07:46.540 [2024-12-14 06:38:00.247458] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d00 is same with the 
state(5) to be set 00:07:46.540 [2024-12-14 06:38:00.247465] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d00 is same with the state(5) to be set 00:07:46.540 [2024-12-14 06:38:00.247473] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d00 is same with the state(5) to be set 00:07:46.540 [2024-12-14 06:38:00.247481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac0d00 is same with the state(5) to be set 00:07:46.540 [2024-12-14 06:38:00.247564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 
06:38:00.247804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.247987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.247999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248029] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.540 [2024-12-14 06:38:00.248428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.540 [2024-12-14 06:38:00.248439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.248981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.248991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.249002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.541 [2024-12-14 06:38:00.249012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.541 [2024-12-14 06:38:00.249022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x156c400 is same with the state(5) to be set 00:07:46.541 [2024-12-14 06:38:00.249071] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x156c400 was disconnected and freed. reset controller. 
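The block of ABORTED - SQ DELETION completions above is what the initiator side prints when submission queue 1 is torn down while bdevperf still has commands outstanding: every queued READ/WRITE on that qpair is failed back with that status, the qpair is disconnected and freed, and the controller is then reset in the next block. When triaging a run like this, a quick way to see how much I/O was in flight is to count the aborted commands per opcode; the file name below is a placeholder for wherever this console output was saved, not something the harness writes itself.

# Count aborted commands per opcode in a saved copy of this console log
# (log file name is a placeholder).
grep -Eo 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]+' build-console.log | awk '{print $3}' | sort | uniq -c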
00:07:46.541 [2024-12-14 06:38:00.250232] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:46.541 06:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.541 06:38:00 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:46.541 06:38:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.541 06:38:00 -- common/autotest_common.sh@10 -- # set +x 00:07:46.541 task offset: 29568 on job bdev=Nvme0n1 fails 00:07:46.541 00:07:46.541 Latency(us) 00:07:46.541 [2024-12-14T06:38:00.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.541 [2024-12-14T06:38:00.533Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:46.541 [2024-12-14T06:38:00.533Z] Job: Nvme0n1 ended in about 0.80 seconds with error 00:07:46.541 Verification LBA range: start 0x0 length 0x400 00:07:46.541 Nvme0n1 : 0.80 2809.02 175.56 79.79 0.00 21843.12 5779.08 27763.43 00:07:46.541 [2024-12-14T06:38:00.533Z] =================================================================================================================== 00:07:46.541 [2024-12-14T06:38:00.533Z] Total : 2809.02 175.56 79.79 0.00 21843.12 5779.08 27763.43 00:07:46.541 [2024-12-14 06:38:00.252319] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.541 [2024-12-14 06:38:00.252348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1592150 (9): Bad file descriptor 00:07:46.541 [2024-12-14 06:38:00.257281] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:46.541 06:38:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.541 06:38:00 -- target/host_management.sh@87 -- # sleep 1 00:07:47.476 06:38:01 -- target/host_management.sh@91 -- # kill -9 60083 00:07:47.476 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (60083) - No such process 00:07:47.476 06:38:01 -- target/host_management.sh@91 -- # true 00:07:47.476 06:38:01 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:47.476 06:38:01 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:47.476 06:38:01 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:47.476 06:38:01 -- nvmf/common.sh@520 -- # config=() 00:07:47.476 06:38:01 -- nvmf/common.sh@520 -- # local subsystem config 00:07:47.476 06:38:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:07:47.476 06:38:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:07:47.476 { 00:07:47.476 "params": { 00:07:47.476 "name": "Nvme$subsystem", 00:07:47.476 "trtype": "$TEST_TRANSPORT", 00:07:47.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.476 "adrfam": "ipv4", 00:07:47.476 "trsvcid": "$NVMF_PORT", 00:07:47.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.476 "hdgst": ${hdgst:-false}, 00:07:47.476 "ddgst": ${ddgst:-false} 00:07:47.476 }, 00:07:47.476 "method": "bdev_nvme_attach_controller" 00:07:47.476 } 00:07:47.476 EOF 00:07:47.476 )") 00:07:47.476 06:38:01 -- nvmf/common.sh@542 -- # cat 00:07:47.476 06:38:01 -- nvmf/common.sh@544 -- # jq . 
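The heredoc above is the per-subsystem template that gen_nvmf_target_json expands; bdevperf reads the result through --json /dev/fd/62, and the resolved parameters are printed in the next block. To replay the same workload by hand, roughly the same thing can be done with the config written to a file. This is a sketch only: the exact subsystems/bdev/config wrapper that gen_nvmf_target_json emits can differ between SPDK trees, while the address, NQNs and queue settings below are the ones this run uses.

# Rough standalone equivalent of the fd/62 construction (wrapper layout assumed,
# parameter values taken from this run).
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1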
00:07:47.476 06:38:01 -- nvmf/common.sh@545 -- # IFS=, 00:07:47.476 06:38:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:07:47.476 "params": { 00:07:47.476 "name": "Nvme0", 00:07:47.476 "trtype": "tcp", 00:07:47.476 "traddr": "10.0.0.2", 00:07:47.476 "adrfam": "ipv4", 00:07:47.476 "trsvcid": "4420", 00:07:47.476 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:47.476 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:47.476 "hdgst": false, 00:07:47.476 "ddgst": false 00:07:47.476 }, 00:07:47.476 "method": "bdev_nvme_attach_controller" 00:07:47.476 }' 00:07:47.476 [2024-12-14 06:38:01.309669] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:47.476 [2024-12-14 06:38:01.309735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60127 ] 00:07:47.476 [2024-12-14 06:38:01.447917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.734 [2024-12-14 06:38:01.506488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.734 Running I/O for 1 seconds... 00:07:48.671 00:07:48.671 Latency(us) 00:07:48.671 [2024-12-14T06:38:02.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.671 [2024-12-14T06:38:02.663Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:48.671 Verification LBA range: start 0x0 length 0x400 00:07:48.671 Nvme0n1 : 1.01 3018.87 188.68 0.00 0.00 20871.69 1131.99 25499.46 00:07:48.671 [2024-12-14T06:38:02.663Z] =================================================================================================================== 00:07:48.671 [2024-12-14T06:38:02.663Z] Total : 3018.87 188.68 0.00 0.00 20871.69 1131.99 25499.46 00:07:48.930 06:38:02 -- target/host_management.sh@101 -- # stoptarget 00:07:48.930 06:38:02 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:48.930 06:38:02 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:48.930 06:38:02 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:48.930 06:38:02 -- target/host_management.sh@40 -- # nvmftestfini 00:07:48.930 06:38:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:48.930 06:38:02 -- nvmf/common.sh@116 -- # sync 00:07:48.930 06:38:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:48.930 06:38:02 -- nvmf/common.sh@119 -- # set +e 00:07:48.930 06:38:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:48.930 06:38:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:48.930 rmmod nvme_tcp 00:07:49.189 rmmod nvme_fabrics 00:07:49.189 rmmod nvme_keyring 00:07:49.189 06:38:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:49.189 06:38:02 -- nvmf/common.sh@123 -- # set -e 00:07:49.189 06:38:02 -- nvmf/common.sh@124 -- # return 0 00:07:49.189 06:38:02 -- nvmf/common.sh@477 -- # '[' -n 60023 ']' 00:07:49.189 06:38:02 -- nvmf/common.sh@478 -- # killprocess 60023 00:07:49.189 06:38:02 -- common/autotest_common.sh@936 -- # '[' -z 60023 ']' 00:07:49.189 06:38:02 -- common/autotest_common.sh@940 -- # kill -0 60023 00:07:49.189 06:38:02 -- common/autotest_common.sh@941 -- # uname 00:07:49.189 06:38:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:49.189 06:38:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60023 00:07:49.189 
killing process with pid 60023 00:07:49.189 06:38:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:49.189 06:38:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:49.189 06:38:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60023' 00:07:49.189 06:38:02 -- common/autotest_common.sh@955 -- # kill 60023 00:07:49.189 06:38:02 -- common/autotest_common.sh@960 -- # wait 60023 00:07:49.189 [2024-12-14 06:38:03.146133] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:49.189 06:38:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:49.189 06:38:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:49.189 06:38:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:49.189 06:38:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:49.189 06:38:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:49.189 06:38:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.189 06:38:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.189 06:38:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.448 06:38:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:07:49.448 ************************************ 00:07:49.448 END TEST nvmf_host_management 00:07:49.448 ************************************ 00:07:49.448 00:07:49.448 real 0m5.361s 00:07:49.448 user 0m22.846s 00:07:49.448 sys 0m1.134s 00:07:49.448 06:38:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.448 06:38:03 -- common/autotest_common.sh@10 -- # set +x 00:07:49.448 06:38:03 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:07:49.448 ************************************ 00:07:49.448 END TEST nvmf_host_management 00:07:49.448 ************************************ 00:07:49.448 00:07:49.448 real 0m6.019s 00:07:49.448 user 0m23.050s 00:07:49.448 sys 0m1.369s 00:07:49.448 06:38:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.448 06:38:03 -- common/autotest_common.sh@10 -- # set +x 00:07:49.448 06:38:03 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:49.448 06:38:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:49.448 06:38:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.448 06:38:03 -- common/autotest_common.sh@10 -- # set +x 00:07:49.448 ************************************ 00:07:49.448 START TEST nvmf_lvol 00:07:49.448 ************************************ 00:07:49.448 06:38:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:49.448 * Looking for test storage... 
00:07:49.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:49.448 06:38:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.448 06:38:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.448 06:38:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.707 06:38:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.707 06:38:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.707 06:38:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.708 06:38:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.708 06:38:03 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.708 06:38:03 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.708 06:38:03 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.708 06:38:03 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.708 06:38:03 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.708 06:38:03 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.708 06:38:03 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.708 06:38:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.708 06:38:03 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.708 06:38:03 -- scripts/common.sh@344 -- # : 1 00:07:49.708 06:38:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.708 06:38:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.708 06:38:03 -- scripts/common.sh@364 -- # decimal 1 00:07:49.708 06:38:03 -- scripts/common.sh@352 -- # local d=1 00:07:49.708 06:38:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.708 06:38:03 -- scripts/common.sh@354 -- # echo 1 00:07:49.708 06:38:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.708 06:38:03 -- scripts/common.sh@365 -- # decimal 2 00:07:49.708 06:38:03 -- scripts/common.sh@352 -- # local d=2 00:07:49.708 06:38:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.708 06:38:03 -- scripts/common.sh@354 -- # echo 2 00:07:49.708 06:38:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.708 06:38:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.708 06:38:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.708 06:38:03 -- scripts/common.sh@367 -- # return 0 00:07:49.708 06:38:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.708 06:38:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.708 --rc genhtml_branch_coverage=1 00:07:49.708 --rc genhtml_function_coverage=1 00:07:49.708 --rc genhtml_legend=1 00:07:49.708 --rc geninfo_all_blocks=1 00:07:49.708 --rc geninfo_unexecuted_blocks=1 00:07:49.708 00:07:49.708 ' 00:07:49.708 06:38:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.708 --rc genhtml_branch_coverage=1 00:07:49.708 --rc genhtml_function_coverage=1 00:07:49.708 --rc genhtml_legend=1 00:07:49.708 --rc geninfo_all_blocks=1 00:07:49.708 --rc geninfo_unexecuted_blocks=1 00:07:49.708 00:07:49.708 ' 00:07:49.708 06:38:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.708 --rc genhtml_branch_coverage=1 00:07:49.708 --rc genhtml_function_coverage=1 00:07:49.708 --rc genhtml_legend=1 00:07:49.708 --rc geninfo_all_blocks=1 00:07:49.708 --rc geninfo_unexecuted_blocks=1 00:07:49.708 00:07:49.708 ' 00:07:49.708 
06:38:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.708 --rc genhtml_branch_coverage=1 00:07:49.708 --rc genhtml_function_coverage=1 00:07:49.708 --rc genhtml_legend=1 00:07:49.708 --rc geninfo_all_blocks=1 00:07:49.708 --rc geninfo_unexecuted_blocks=1 00:07:49.708 00:07:49.708 ' 00:07:49.708 06:38:03 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:49.708 06:38:03 -- nvmf/common.sh@7 -- # uname -s 00:07:49.708 06:38:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.708 06:38:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.708 06:38:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.708 06:38:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.708 06:38:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.708 06:38:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.708 06:38:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.708 06:38:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.708 06:38:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.708 06:38:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.708 06:38:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:07:49.708 06:38:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:07:49.708 06:38:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.708 06:38:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.708 06:38:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:49.708 06:38:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.708 06:38:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.708 06:38:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.708 06:38:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.708 06:38:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.708 06:38:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.708 06:38:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.708 06:38:03 -- paths/export.sh@5 -- # export PATH 00:07:49.708 06:38:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.708 06:38:03 -- nvmf/common.sh@46 -- # : 0 00:07:49.708 06:38:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:49.708 06:38:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:49.708 06:38:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:49.708 06:38:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.708 06:38:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.708 06:38:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:49.708 06:38:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:49.708 06:38:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:49.708 06:38:03 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:49.708 06:38:03 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:49.708 06:38:03 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:49.708 06:38:03 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:49.708 06:38:03 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.708 06:38:03 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:49.708 06:38:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:49.708 06:38:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.708 06:38:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:49.708 06:38:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:49.708 06:38:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:49.708 06:38:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.708 06:38:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.708 06:38:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.708 06:38:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:49.708 06:38:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:49.708 06:38:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:49.708 06:38:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:49.708 06:38:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:49.708 06:38:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:49.708 06:38:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.708 06:38:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.708 06:38:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:49.708 06:38:03 -- 
nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:49.708 06:38:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:49.708 06:38:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:49.708 06:38:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:49.709 06:38:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.709 06:38:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:49.709 06:38:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:49.709 06:38:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:49.709 06:38:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:49.709 06:38:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:49.709 06:38:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:49.709 Cannot find device "nvmf_tgt_br" 00:07:49.709 06:38:03 -- nvmf/common.sh@154 -- # true 00:07:49.709 06:38:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:49.709 Cannot find device "nvmf_tgt_br2" 00:07:49.709 06:38:03 -- nvmf/common.sh@155 -- # true 00:07:49.709 06:38:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:49.709 06:38:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:49.709 Cannot find device "nvmf_tgt_br" 00:07:49.709 06:38:03 -- nvmf/common.sh@157 -- # true 00:07:49.709 06:38:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:49.709 Cannot find device "nvmf_tgt_br2" 00:07:49.709 06:38:03 -- nvmf/common.sh@158 -- # true 00:07:49.709 06:38:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:49.709 06:38:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:49.709 06:38:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:49.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:49.709 06:38:03 -- nvmf/common.sh@161 -- # true 00:07:49.709 06:38:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:49.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:49.709 06:38:03 -- nvmf/common.sh@162 -- # true 00:07:49.709 06:38:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:49.709 06:38:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:49.709 06:38:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:49.709 06:38:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:49.709 06:38:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:49.709 06:38:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:49.968 06:38:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:49.968 06:38:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:49.968 06:38:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:49.968 06:38:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:49.968 06:38:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:49.968 06:38:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:49.968 06:38:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:49.968 06:38:03 -- 
nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:49.968 06:38:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:49.968 06:38:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:49.968 06:38:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:49.968 06:38:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:49.968 06:38:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:49.968 06:38:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:49.968 06:38:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:49.968 06:38:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:49.968 06:38:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:49.968 06:38:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:49.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:07:49.968 00:07:49.968 --- 10.0.0.2 ping statistics --- 00:07:49.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.968 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:49.968 06:38:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:49.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:49.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:07:49.968 00:07:49.968 --- 10.0.0.3 ping statistics --- 00:07:49.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.968 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:49.968 06:38:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:49.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:49.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:07:49.968 00:07:49.968 --- 10.0.0.1 ping statistics --- 00:07:49.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.968 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:07:49.968 06:38:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.968 06:38:03 -- nvmf/common.sh@421 -- # return 0 00:07:49.968 06:38:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:49.968 06:38:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.968 06:38:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:49.968 06:38:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:49.968 06:38:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.968 06:38:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:49.968 06:38:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:49.968 06:38:03 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:49.968 06:38:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:49.968 06:38:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:49.968 06:38:03 -- common/autotest_common.sh@10 -- # set +x 00:07:49.968 06:38:03 -- nvmf/common.sh@469 -- # nvmfpid=60363 00:07:49.968 06:38:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:49.968 06:38:03 -- nvmf/common.sh@470 -- # waitforlisten 60363 00:07:49.968 06:38:03 -- common/autotest_common.sh@829 -- # '[' -z 60363 ']' 00:07:49.968 06:38:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.968 06:38:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.968 06:38:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.968 06:38:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.968 06:38:03 -- common/autotest_common.sh@10 -- # set +x 00:07:49.968 [2024-12-14 06:38:03.901245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.968 [2024-12-14 06:38:03.901367] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.228 [2024-12-14 06:38:04.041825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.228 [2024-12-14 06:38:04.110843] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:50.228 [2024-12-14 06:38:04.111051] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.228 [2024-12-14 06:38:04.111069] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.228 [2024-12-14 06:38:04.111080] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
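Once the target is up (its reactor start-up notices continue just below), nvmf_lvol.sh drives the whole logical-volume workflow through rpc.py. The individual calls are spread across the next stretch of the log; condensed, they amount to the sequence below, where the UUID-valued variables stand in for whatever each call prints on a given run.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Two 64 MiB / 512-byte-block malloc bdevs, striped into raid0, with an lvstore on top.
$rpc bdev_malloc_create 64 512
$rpc bdev_malloc_create 64 512
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB volume
# Export the volume over NVMe/TCP on the veth address set up earlier.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# With spdk_nvme_perf writing to it, snapshot, grow, clone and inflate the volume.
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"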
00:07:50.228 [2024-12-14 06:38:04.111274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.228 [2024-12-14 06:38:04.112018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.228 [2024-12-14 06:38:04.112030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.164 06:38:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.164 06:38:04 -- common/autotest_common.sh@862 -- # return 0 00:07:51.164 06:38:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:51.164 06:38:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.164 06:38:04 -- common/autotest_common.sh@10 -- # set +x 00:07:51.164 06:38:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.164 06:38:05 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:51.422 [2024-12-14 06:38:05.267863] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.422 06:38:05 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:51.680 06:38:05 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:51.680 06:38:05 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:51.939 06:38:05 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:51.939 06:38:05 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:52.197 06:38:06 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:52.456 06:38:06 -- target/nvmf_lvol.sh@29 -- # lvs=baf2980a-a3bc-4ea2-a5ce-3a78a41d0c5a 00:07:52.456 06:38:06 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u baf2980a-a3bc-4ea2-a5ce-3a78a41d0c5a lvol 20 00:07:52.715 06:38:06 -- target/nvmf_lvol.sh@32 -- # lvol=44d4e224-291e-4a5f-9bf9-f7d5f9783c17 00:07:52.715 06:38:06 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:52.974 06:38:06 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 44d4e224-291e-4a5f-9bf9-f7d5f9783c17 00:07:53.232 06:38:07 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:53.491 [2024-12-14 06:38:07.288044] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.491 06:38:07 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.750 06:38:07 -- target/nvmf_lvol.sh@42 -- # perf_pid=60439 00:07:53.750 06:38:07 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:53.750 06:38:07 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:54.686 06:38:08 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 44d4e224-291e-4a5f-9bf9-f7d5f9783c17 MY_SNAPSHOT 00:07:54.945 06:38:08 -- target/nvmf_lvol.sh@47 -- # snapshot=82449a78-501b-4444-adcb-d82000e4215b 00:07:54.945 06:38:08 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 44d4e224-291e-4a5f-9bf9-f7d5f9783c17 30 00:07:55.203 06:38:09 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 82449a78-501b-4444-adcb-d82000e4215b MY_CLONE 00:07:55.462 06:38:09 -- target/nvmf_lvol.sh@49 -- # clone=d2156568-f7c4-4f26-99ac-a33b755cc0e0 00:07:55.462 06:38:09 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate d2156568-f7c4-4f26-99ac-a33b755cc0e0 00:07:56.030 06:38:09 -- target/nvmf_lvol.sh@53 -- # wait 60439 00:08:04.151 Initializing NVMe Controllers 00:08:04.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:04.151 Controller IO queue size 128, less than required. 00:08:04.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:04.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:04.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:04.151 Initialization complete. Launching workers. 00:08:04.151 ======================================================== 00:08:04.151 Latency(us) 00:08:04.151 Device Information : IOPS MiB/s Average min max 00:08:04.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10385.59 40.57 12334.60 2016.78 52729.95 00:08:04.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10504.89 41.03 12189.96 1302.69 48129.55 00:08:04.151 ======================================================== 00:08:04.151 Total : 20890.49 81.60 12261.87 1302.69 52729.95 00:08:04.151 00:08:04.151 06:38:17 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:04.151 06:38:18 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 44d4e224-291e-4a5f-9bf9-f7d5f9783c17 00:08:04.409 06:38:18 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u baf2980a-a3bc-4ea2-a5ce-3a78a41d0c5a 00:08:04.668 06:38:18 -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:04.668 06:38:18 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:04.668 06:38:18 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:04.668 06:38:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:04.668 06:38:18 -- nvmf/common.sh@116 -- # sync 00:08:04.668 06:38:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:04.668 06:38:18 -- nvmf/common.sh@119 -- # set +e 00:08:04.668 06:38:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:04.668 06:38:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:04.668 rmmod nvme_tcp 00:08:04.933 rmmod nvme_fabrics 00:08:04.933 rmmod nvme_keyring 00:08:04.933 06:38:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:04.933 06:38:18 -- nvmf/common.sh@123 -- # set -e 00:08:04.933 06:38:18 -- nvmf/common.sh@124 -- # return 0 00:08:04.933 06:38:18 -- nvmf/common.sh@477 -- # '[' -n 60363 ']' 00:08:04.933 06:38:18 -- nvmf/common.sh@478 -- # killprocess 60363 00:08:04.933 06:38:18 -- common/autotest_common.sh@936 -- # '[' -z 60363 ']' 00:08:04.933 06:38:18 -- common/autotest_common.sh@940 -- # kill -0 60363 00:08:04.933 06:38:18 -- common/autotest_common.sh@941 -- # uname 00:08:04.933 06:38:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:04.933 06:38:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 60363 00:08:04.933 06:38:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:04.933 killing process with pid 60363 00:08:04.933 06:38:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:04.933 06:38:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60363' 00:08:04.933 06:38:18 -- common/autotest_common.sh@955 -- # kill 60363 00:08:04.933 06:38:18 -- common/autotest_common.sh@960 -- # wait 60363 00:08:05.206 06:38:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:05.206 06:38:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:05.206 06:38:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:05.206 06:38:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.206 06:38:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:05.206 06:38:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.206 06:38:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.206 06:38:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.206 06:38:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:05.206 00:08:05.206 real 0m15.682s 00:08:05.206 user 1m4.733s 00:08:05.206 sys 0m4.643s 00:08:05.207 06:38:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.207 06:38:18 -- common/autotest_common.sh@10 -- # set +x 00:08:05.207 ************************************ 00:08:05.207 END TEST nvmf_lvol 00:08:05.207 ************************************ 00:08:05.207 06:38:19 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:05.207 06:38:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:05.207 06:38:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.207 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.207 ************************************ 00:08:05.207 START TEST nvmf_lvs_grow 00:08:05.207 ************************************ 00:08:05.207 06:38:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:05.207 * Looking for test storage... 
00:08:05.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:05.207 06:38:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:05.207 06:38:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:05.207 06:38:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:05.207 06:38:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:05.207 06:38:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:05.207 06:38:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:05.207 06:38:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:05.207 06:38:19 -- scripts/common.sh@335 -- # IFS=.-: 00:08:05.207 06:38:19 -- scripts/common.sh@335 -- # read -ra ver1 00:08:05.207 06:38:19 -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.207 06:38:19 -- scripts/common.sh@336 -- # read -ra ver2 00:08:05.207 06:38:19 -- scripts/common.sh@337 -- # local 'op=<' 00:08:05.207 06:38:19 -- scripts/common.sh@339 -- # ver1_l=2 00:08:05.207 06:38:19 -- scripts/common.sh@340 -- # ver2_l=1 00:08:05.207 06:38:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:05.207 06:38:19 -- scripts/common.sh@343 -- # case "$op" in 00:08:05.207 06:38:19 -- scripts/common.sh@344 -- # : 1 00:08:05.207 06:38:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:05.207 06:38:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:05.207 06:38:19 -- scripts/common.sh@364 -- # decimal 1 00:08:05.207 06:38:19 -- scripts/common.sh@352 -- # local d=1 00:08:05.207 06:38:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.207 06:38:19 -- scripts/common.sh@354 -- # echo 1 00:08:05.207 06:38:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:05.207 06:38:19 -- scripts/common.sh@365 -- # decimal 2 00:08:05.207 06:38:19 -- scripts/common.sh@352 -- # local d=2 00:08:05.207 06:38:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.207 06:38:19 -- scripts/common.sh@354 -- # echo 2 00:08:05.207 06:38:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:05.207 06:38:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:05.207 06:38:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:05.207 06:38:19 -- scripts/common.sh@367 -- # return 0 00:08:05.207 06:38:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.207 06:38:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:05.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.207 --rc genhtml_branch_coverage=1 00:08:05.207 --rc genhtml_function_coverage=1 00:08:05.207 --rc genhtml_legend=1 00:08:05.207 --rc geninfo_all_blocks=1 00:08:05.207 --rc geninfo_unexecuted_blocks=1 00:08:05.207 00:08:05.207 ' 00:08:05.207 06:38:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:05.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.207 --rc genhtml_branch_coverage=1 00:08:05.207 --rc genhtml_function_coverage=1 00:08:05.207 --rc genhtml_legend=1 00:08:05.207 --rc geninfo_all_blocks=1 00:08:05.207 --rc geninfo_unexecuted_blocks=1 00:08:05.207 00:08:05.207 ' 00:08:05.207 06:38:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:05.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.207 --rc genhtml_branch_coverage=1 00:08:05.207 --rc genhtml_function_coverage=1 00:08:05.207 --rc genhtml_legend=1 00:08:05.207 --rc geninfo_all_blocks=1 00:08:05.207 --rc geninfo_unexecuted_blocks=1 00:08:05.207 00:08:05.207 ' 00:08:05.207 
06:38:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:05.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.207 --rc genhtml_branch_coverage=1 00:08:05.207 --rc genhtml_function_coverage=1 00:08:05.207 --rc genhtml_legend=1 00:08:05.207 --rc geninfo_all_blocks=1 00:08:05.207 --rc geninfo_unexecuted_blocks=1 00:08:05.207 00:08:05.207 ' 00:08:05.207 06:38:19 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:05.466 06:38:19 -- nvmf/common.sh@7 -- # uname -s 00:08:05.466 06:38:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.466 06:38:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.466 06:38:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.466 06:38:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.466 06:38:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.466 06:38:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.466 06:38:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:05.466 06:38:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.466 06:38:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.466 06:38:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.466 06:38:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:08:05.466 06:38:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:08:05.466 06:38:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.466 06:38:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.466 06:38:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:05.466 06:38:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:05.466 06:38:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.466 06:38:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.466 06:38:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.466 06:38:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.466 06:38:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.466 06:38:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.466 06:38:19 -- paths/export.sh@5 -- # export PATH 00:08:05.466 06:38:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.466 06:38:19 -- nvmf/common.sh@46 -- # : 0 00:08:05.466 06:38:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:05.466 06:38:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:05.466 06:38:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:05.466 06:38:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.466 06:38:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.466 06:38:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:05.466 06:38:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:05.466 06:38:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:05.466 06:38:19 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:05.466 06:38:19 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:05.466 06:38:19 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:08:05.466 06:38:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:05.466 06:38:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:05.466 06:38:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:05.466 06:38:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:05.466 06:38:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:05.466 06:38:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.466 06:38:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.466 06:38:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:05.466 06:38:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:05.466 06:38:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:05.466 06:38:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:05.466 06:38:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:05.466 06:38:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:05.466 06:38:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:05.466 06:38:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:05.466 06:38:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:05.466 06:38:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:05.466 06:38:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:05.466 06:38:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:05.466 06:38:19 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:05.466 06:38:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:05.466 06:38:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:05.466 06:38:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:05.466 06:38:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:05.466 06:38:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:05.466 06:38:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:05.466 06:38:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:05.466 06:38:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:05.466 Cannot find device "nvmf_tgt_br" 00:08:05.466 06:38:19 -- nvmf/common.sh@154 -- # true 00:08:05.466 06:38:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:05.466 Cannot find device "nvmf_tgt_br2" 00:08:05.466 06:38:19 -- nvmf/common.sh@155 -- # true 00:08:05.466 06:38:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:05.466 06:38:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:05.466 Cannot find device "nvmf_tgt_br" 00:08:05.466 06:38:19 -- nvmf/common.sh@157 -- # true 00:08:05.466 06:38:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:05.466 Cannot find device "nvmf_tgt_br2" 00:08:05.466 06:38:19 -- nvmf/common.sh@158 -- # true 00:08:05.466 06:38:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:05.466 06:38:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:05.466 06:38:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:05.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:05.466 06:38:19 -- nvmf/common.sh@161 -- # true 00:08:05.466 06:38:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:05.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:05.466 06:38:19 -- nvmf/common.sh@162 -- # true 00:08:05.466 06:38:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:05.466 06:38:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:05.466 06:38:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:05.466 06:38:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:05.466 06:38:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:05.466 06:38:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:05.466 06:38:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:05.466 06:38:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:05.466 06:38:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:05.466 06:38:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:05.466 06:38:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:05.466 06:38:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:05.466 06:38:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:05.466 06:38:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:05.725 06:38:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
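The commands around this point are nvmf_veth_init rebuilding, for the lvs_grow suite, the same per-run network it used for nvmf_lvol and then deleted: a host-side veth for the initiator, two veths moved into the nvmf_tgt_ns_spdk namespace for the target, all joined by the nvmf_br bridge, with iptables opened for port 4420. The loopback, bridge and ping checks follow just below; pulled together, the setup amounts to the following (interface and namespace names as used by nvmf/common.sh, stale-interface cleanup and error handling omitted).

# Consolidated view of the nvmf_veth_init steps running here.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target listener
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT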
00:08:05.725 06:38:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:05.725 06:38:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:05.725 06:38:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:05.725 06:38:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:05.725 06:38:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:05.725 06:38:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:05.725 06:38:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:05.725 06:38:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:05.725 06:38:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:05.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:05.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:08:05.725 00:08:05.725 --- 10.0.0.2 ping statistics --- 00:08:05.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.725 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:05.725 06:38:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:05.725 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:05.725 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:08:05.725 00:08:05.725 --- 10.0.0.3 ping statistics --- 00:08:05.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.725 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:05.725 06:38:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:05.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:05.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:08:05.725 00:08:05.725 --- 10.0.0.1 ping statistics --- 00:08:05.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:05.725 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:05.725 06:38:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:05.725 06:38:19 -- nvmf/common.sh@421 -- # return 0 00:08:05.725 06:38:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:05.725 06:38:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:05.725 06:38:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:05.725 06:38:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:05.725 06:38:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:05.725 06:38:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:05.726 06:38:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:05.726 06:38:19 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:08:05.726 06:38:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:05.726 06:38:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:05.726 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.726 06:38:19 -- nvmf/common.sh@469 -- # nvmfpid=60770 00:08:05.726 06:38:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:05.726 06:38:19 -- nvmf/common.sh@470 -- # waitforlisten 60770 00:08:05.726 06:38:19 -- common/autotest_common.sh@829 -- # '[' -z 60770 ']' 00:08:05.726 06:38:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.726 06:38:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:05.726 06:38:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:05.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.726 06:38:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:05.726 06:38:19 -- common/autotest_common.sh@10 -- # set +x 00:08:05.726 [2024-12-14 06:38:19.654454] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:05.726 [2024-12-14 06:38:19.654617] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.985 [2024-12-14 06:38:19.806600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.985 [2024-12-14 06:38:19.857584] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:05.985 [2024-12-14 06:38:19.857796] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:05.985 [2024-12-14 06:38:19.857810] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:05.985 [2024-12-14 06:38:19.857819] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:05.985 [2024-12-14 06:38:19.857849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.922 06:38:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.922 06:38:20 -- common/autotest_common.sh@862 -- # return 0 00:08:06.922 06:38:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:06.922 06:38:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.922 06:38:20 -- common/autotest_common.sh@10 -- # set +x 00:08:06.922 06:38:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.922 06:38:20 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:07.185 [2024-12-14 06:38:21.002537] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.185 06:38:21 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:08:07.185 06:38:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:07.185 06:38:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.185 06:38:21 -- common/autotest_common.sh@10 -- # set +x 00:08:07.185 ************************************ 00:08:07.185 START TEST lvs_grow_clean 00:08:07.185 ************************************ 00:08:07.185 06:38:21 -- common/autotest_common.sh@1114 -- # lvs_grow 00:08:07.185 06:38:21 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:07.185 06:38:21 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:07.185 06:38:21 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:07.185 06:38:21 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:07.185 06:38:21 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:07.185 06:38:21 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:07.185 06:38:21 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:07.185 06:38:21 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:07.185 06:38:21 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.444 06:38:21 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:07.444 06:38:21 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:07.702 06:38:21 -- target/nvmf_lvs_grow.sh@28 -- # lvs=f5f12e25-7637-4d24-8259-f1636fd519c9 00:08:07.702 06:38:21 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:07.702 06:38:21 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5f12e25-7637-4d24-8259-f1636fd519c9 00:08:07.961 06:38:21 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:07.961 06:38:21 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:07.961 06:38:21 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f5f12e25-7637-4d24-8259-f1636fd519c9 lvol 150 00:08:08.220 06:38:22 -- target/nvmf_lvs_grow.sh@33 -- # lvol=da127675-b4cf-4135-bea3-a14379834cc3 00:08:08.220 06:38:22 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:08.220 06:38:22 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:08.479 [2024-12-14 06:38:22.263732] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:08.479 [2024-12-14 06:38:22.263828] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:08.479 true 00:08:08.479 06:38:22 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5f12e25-7637-4d24-8259-f1636fd519c9 00:08:08.479 06:38:22 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:08.739 06:38:22 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:08.739 06:38:22 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:08.739 06:38:22 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 da127675-b4cf-4135-bea3-a14379834cc3 00:08:08.999 06:38:22 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:09.258 [2024-12-14 06:38:23.128268] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.258 06:38:23 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.518 06:38:23 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=60852 00:08:09.518 06:38:23 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:09.518 06:38:23 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.518 06:38:23 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 60852 /var/tmp/bdevperf.sock 00:08:09.518 06:38:23 -- common/autotest_common.sh@829 -- # '[' -z 60852 ']' 00:08:09.518 06:38:23 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.518 06:38:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:09.518 06:38:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.518 06:38:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.518 06:38:23 -- common/autotest_common.sh@10 -- # set +x 00:08:09.518 [2024-12-14 06:38:23.407013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:09.518 [2024-12-14 06:38:23.407099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60852 ] 00:08:09.777 [2024-12-14 06:38:23.540545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.777 [2024-12-14 06:38:23.590398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.713 06:38:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.713 06:38:24 -- common/autotest_common.sh@862 -- # return 0 00:08:10.713 06:38:24 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:10.713 Nvme0n1 00:08:10.713 06:38:24 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:11.286 [ 00:08:11.286 { 00:08:11.286 "name": "Nvme0n1", 00:08:11.286 "aliases": [ 00:08:11.286 "da127675-b4cf-4135-bea3-a14379834cc3" 00:08:11.286 ], 00:08:11.286 "product_name": "NVMe disk", 00:08:11.286 "block_size": 4096, 00:08:11.286 "num_blocks": 38912, 00:08:11.286 "uuid": "da127675-b4cf-4135-bea3-a14379834cc3", 00:08:11.286 "assigned_rate_limits": { 00:08:11.286 "rw_ios_per_sec": 0, 00:08:11.286 "rw_mbytes_per_sec": 0, 00:08:11.286 "r_mbytes_per_sec": 0, 00:08:11.286 "w_mbytes_per_sec": 0 00:08:11.286 }, 00:08:11.286 "claimed": false, 00:08:11.286 "zoned": false, 00:08:11.286 "supported_io_types": { 00:08:11.286 "read": true, 00:08:11.286 "write": true, 00:08:11.286 "unmap": true, 00:08:11.286 "write_zeroes": true, 00:08:11.286 "flush": true, 00:08:11.286 "reset": true, 00:08:11.286 "compare": true, 00:08:11.286 "compare_and_write": true, 00:08:11.286 "abort": true, 00:08:11.286 "nvme_admin": true, 00:08:11.286 "nvme_io": true 00:08:11.286 }, 00:08:11.286 "driver_specific": { 00:08:11.286 "nvme": [ 00:08:11.286 { 00:08:11.286 "trid": { 00:08:11.286 "trtype": "TCP", 00:08:11.286 "adrfam": "IPv4", 00:08:11.286 "traddr": "10.0.0.2", 00:08:11.286 "trsvcid": "4420", 00:08:11.286 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:11.286 }, 00:08:11.286 "ctrlr_data": { 00:08:11.286 "cntlid": 1, 00:08:11.286 "vendor_id": "0x8086", 00:08:11.286 "model_number": "SPDK bdev Controller", 00:08:11.286 "serial_number": "SPDK0", 00:08:11.286 "firmware_revision": "24.01.1", 00:08:11.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:11.286 "oacs": { 00:08:11.286 "security": 0, 00:08:11.286 "format": 0, 00:08:11.286 "firmware": 0, 00:08:11.286 "ns_manage": 0 00:08:11.286 }, 00:08:11.286 "multi_ctrlr": true, 00:08:11.286 "ana_reporting": false 00:08:11.286 }, 00:08:11.286 "vs": { 00:08:11.286 "nvme_version": "1.3" 00:08:11.286 }, 
00:08:11.286 "ns_data": { 00:08:11.286 "id": 1, 00:08:11.286 "can_share": true 00:08:11.286 } 00:08:11.286 } 00:08:11.286 ], 00:08:11.286 "mp_policy": "active_passive" 00:08:11.286 } 00:08:11.286 } 00:08:11.286 ] 00:08:11.286 06:38:24 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=60881 00:08:11.286 06:38:24 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:11.286 06:38:24 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:11.286 Running I/O for 10 seconds... 00:08:12.223 Latency(us) 00:08:12.223 [2024-12-14T06:38:26.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.223 [2024-12-14T06:38:26.215Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.223 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:12.223 [2024-12-14T06:38:26.215Z] =================================================================================================================== 00:08:12.223 [2024-12-14T06:38:26.215Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:12.223 00:08:13.159 06:38:26 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f5f12e25-7637-4d24-8259-f1636fd519c9 00:08:13.159 [2024-12-14T06:38:27.151Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.159 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:13.159 [2024-12-14T06:38:27.151Z] =================================================================================================================== 00:08:13.159 [2024-12-14T06:38:27.151Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:13.159 00:08:13.418 true 00:08:13.418 06:38:27 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:13.418 06:38:27 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5f12e25-7637-4d24-8259-f1636fd519c9 00:08:13.677 06:38:27 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:13.677 06:38:27 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:13.677 06:38:27 -- target/nvmf_lvs_grow.sh@65 -- # wait 60881 00:08:14.244 [2024-12-14T06:38:28.236Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.244 Nvme0n1 : 3.00 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:08:14.244 [2024-12-14T06:38:28.236Z] =================================================================================================================== 00:08:14.244 [2024-12-14T06:38:28.236Z] Total : 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:08:14.244 00:08:15.181 [2024-12-14T06:38:29.173Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.181 Nvme0n1 : 4.00 6685.25 26.11 0.00 0.00 0.00 0.00 0.00 00:08:15.181 [2024-12-14T06:38:29.173Z] =================================================================================================================== 00:08:15.181 [2024-12-14T06:38:29.173Z] Total : 6685.25 26.11 0.00 0.00 0.00 0.00 0.00 00:08:15.181 00:08:16.120 [2024-12-14T06:38:30.112Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.120 Nvme0n1 : 5.00 6643.60 25.95 0.00 0.00 0.00 0.00 0.00 00:08:16.120 [2024-12-14T06:38:30.112Z] =================================================================================================================== 00:08:16.120 [2024-12-14T06:38:30.112Z] Total : 6643.60 25.95 0.00 0.00 0.00 0.00 0.00 00:08:16.120 
00:08:17.497 [2024-12-14T06:38:31.489Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.497 Nvme0n1 : 6.00 6637.00 25.93 0.00 0.00 0.00 0.00 0.00 00:08:17.497 [2024-12-14T06:38:31.489Z] =================================================================================================================== 00:08:17.497 [2024-12-14T06:38:31.489Z] Total : 6637.00 25.93 0.00 0.00 0.00 0.00 0.00 00:08:17.497 00:08:18.432 [2024-12-14T06:38:32.424Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.432 Nvme0n1 : 7.00 6650.43 25.98 0.00 0.00 0.00 0.00 0.00 00:08:18.432 [2024-12-14T06:38:32.424Z] =================================================================================================================== 00:08:18.432 [2024-12-14T06:38:32.424Z] Total : 6650.43 25.98 0.00 0.00 0.00 0.00 0.00 00:08:18.432 00:08:19.367 [2024-12-14T06:38:33.359Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.367 Nvme0n1 : 8.00 6644.62 25.96 0.00 0.00 0.00 0.00 0.00 00:08:19.367 [2024-12-14T06:38:33.359Z] =================================================================================================================== 00:08:19.367 [2024-12-14T06:38:33.359Z] Total : 6644.62 25.96 0.00 0.00 0.00 0.00 0.00 00:08:19.367 00:08:20.302 [2024-12-14T06:38:34.294Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.302 Nvme0n1 : 9.00 6626.00 25.88 0.00 0.00 0.00 0.00 0.00 00:08:20.302 [2024-12-14T06:38:34.294Z] =================================================================================================================== 00:08:20.302 [2024-12-14T06:38:34.294Z] Total : 6626.00 25.88 0.00 0.00 0.00 0.00 0.00 00:08:20.302 00:08:21.236 [2024-12-14T06:38:35.228Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.236 Nvme0n1 : 10.00 6623.80 25.87 0.00 0.00 0.00 0.00 0.00 00:08:21.236 [2024-12-14T06:38:35.228Z] =================================================================================================================== 00:08:21.236 [2024-12-14T06:38:35.228Z] Total : 6623.80 25.87 0.00 0.00 0.00 0.00 0.00 00:08:21.236 00:08:21.236 00:08:21.236 Latency(us) 00:08:21.236 [2024-12-14T06:38:35.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.236 [2024-12-14T06:38:35.228Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.236 Nvme0n1 : 10.01 6632.55 25.91 0.00 0.00 19293.04 16324.42 66250.94 00:08:21.236 [2024-12-14T06:38:35.228Z] =================================================================================================================== 00:08:21.236 [2024-12-14T06:38:35.228Z] Total : 6632.55 25.91 0.00 0.00 19293.04 16324.42 66250.94 00:08:21.236 0 00:08:21.236 06:38:35 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 60852 00:08:21.236 06:38:35 -- common/autotest_common.sh@936 -- # '[' -z 60852 ']' 00:08:21.236 06:38:35 -- common/autotest_common.sh@940 -- # kill -0 60852 00:08:21.236 06:38:35 -- common/autotest_common.sh@941 -- # uname 00:08:21.236 06:38:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:21.236 06:38:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60852 00:08:21.236 killing process with pid 60852 00:08:21.236 Received shutdown signal, test time was about 10.000000 seconds 00:08:21.236 00:08:21.236 Latency(us) 00:08:21.236 [2024-12-14T06:38:35.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:08:21.236 [2024-12-14T06:38:35.228Z] =================================================================================================================== 00:08:21.236 [2024-12-14T06:38:35.228Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:21.236 06:38:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:21.236 06:38:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:21.236 06:38:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60852' 00:08:21.236 06:38:35 -- common/autotest_common.sh@955 -- # kill 60852 00:08:21.236 06:38:35 -- common/autotest_common.sh@960 -- # wait 60852 00:08:21.494 06:38:35 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:21.752 06:38:35 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:21.752 06:38:35 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5f12e25-7637-4d24-8259-f1636fd519c9 00:08:22.011 06:38:35 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:22.011 06:38:35 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:08:22.011 06:38:35 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:22.269 [2024-12-14 06:38:36.098550] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:22.269 06:38:36 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5f12e25-7637-4d24-8259-f1636fd519c9 00:08:22.269 06:38:36 -- common/autotest_common.sh@650 -- # local es=0 00:08:22.269 06:38:36 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5f12e25-7637-4d24-8259-f1636fd519c9 00:08:22.269 06:38:36 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.269 06:38:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.269 06:38:36 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.269 06:38:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.269 06:38:36 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.269 06:38:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.269 06:38:36 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:22.270 06:38:36 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:22.270 06:38:36 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5f12e25-7637-4d24-8259-f1636fd519c9 00:08:22.528 request: 00:08:22.528 { 00:08:22.528 "uuid": "f5f12e25-7637-4d24-8259-f1636fd519c9", 00:08:22.528 "method": "bdev_lvol_get_lvstores", 00:08:22.528 "req_id": 1 00:08:22.528 } 00:08:22.528 Got JSON-RPC error response 00:08:22.528 response: 00:08:22.528 { 00:08:22.528 "code": -19, 00:08:22.528 "message": "No such device" 00:08:22.528 } 00:08:22.528 06:38:36 -- common/autotest_common.sh@653 -- # es=1 00:08:22.528 06:38:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.528 06:38:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:22.528 06:38:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.528 06:38:36 -- 
target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.787 aio_bdev 00:08:22.787 06:38:36 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev da127675-b4cf-4135-bea3-a14379834cc3 00:08:22.787 06:38:36 -- common/autotest_common.sh@897 -- # local bdev_name=da127675-b4cf-4135-bea3-a14379834cc3 00:08:22.787 06:38:36 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:22.787 06:38:36 -- common/autotest_common.sh@899 -- # local i 00:08:22.787 06:38:36 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:22.787 06:38:36 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:22.787 06:38:36 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:23.046 06:38:36 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da127675-b4cf-4135-bea3-a14379834cc3 -t 2000 00:08:23.306 [ 00:08:23.306 { 00:08:23.306 "name": "da127675-b4cf-4135-bea3-a14379834cc3", 00:08:23.306 "aliases": [ 00:08:23.306 "lvs/lvol" 00:08:23.306 ], 00:08:23.306 "product_name": "Logical Volume", 00:08:23.306 "block_size": 4096, 00:08:23.306 "num_blocks": 38912, 00:08:23.306 "uuid": "da127675-b4cf-4135-bea3-a14379834cc3", 00:08:23.306 "assigned_rate_limits": { 00:08:23.306 "rw_ios_per_sec": 0, 00:08:23.306 "rw_mbytes_per_sec": 0, 00:08:23.306 "r_mbytes_per_sec": 0, 00:08:23.306 "w_mbytes_per_sec": 0 00:08:23.306 }, 00:08:23.306 "claimed": false, 00:08:23.306 "zoned": false, 00:08:23.306 "supported_io_types": { 00:08:23.306 "read": true, 00:08:23.306 "write": true, 00:08:23.306 "unmap": true, 00:08:23.306 "write_zeroes": true, 00:08:23.306 "flush": false, 00:08:23.306 "reset": true, 00:08:23.306 "compare": false, 00:08:23.306 "compare_and_write": false, 00:08:23.306 "abort": false, 00:08:23.306 "nvme_admin": false, 00:08:23.306 "nvme_io": false 00:08:23.306 }, 00:08:23.306 "driver_specific": { 00:08:23.306 "lvol": { 00:08:23.306 "lvol_store_uuid": "f5f12e25-7637-4d24-8259-f1636fd519c9", 00:08:23.306 "base_bdev": "aio_bdev", 00:08:23.306 "thin_provision": false, 00:08:23.306 "snapshot": false, 00:08:23.306 "clone": false, 00:08:23.306 "esnap_clone": false 00:08:23.306 } 00:08:23.306 } 00:08:23.306 } 00:08:23.306 ] 00:08:23.306 06:38:37 -- common/autotest_common.sh@905 -- # return 0 00:08:23.306 06:38:37 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5f12e25-7637-4d24-8259-f1636fd519c9 00:08:23.306 06:38:37 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:23.565 06:38:37 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:23.565 06:38:37 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f5f12e25-7637-4d24-8259-f1636fd519c9 00:08:23.565 06:38:37 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:23.823 06:38:37 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:08:23.823 06:38:37 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete da127675-b4cf-4135-bea3-a14379834cc3 00:08:24.082 06:38:37 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f5f12e25-7637-4d24-8259-f1636fd519c9 00:08:24.341 06:38:38 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.599 06:38:38 -- 
target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:24.858 ************************************ 00:08:24.858 END TEST lvs_grow_clean 00:08:24.858 ************************************ 00:08:24.858 00:08:24.858 real 0m17.706s 00:08:24.858 user 0m16.930s 00:08:24.858 sys 0m2.299s 00:08:24.858 06:38:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.858 06:38:38 -- common/autotest_common.sh@10 -- # set +x 00:08:24.858 06:38:38 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:24.858 06:38:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:24.858 06:38:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.858 06:38:38 -- common/autotest_common.sh@10 -- # set +x 00:08:24.858 ************************************ 00:08:24.858 START TEST lvs_grow_dirty 00:08:24.858 ************************************ 00:08:24.858 06:38:38 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:08:24.858 06:38:38 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:24.858 06:38:38 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:24.858 06:38:38 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:24.858 06:38:38 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:24.858 06:38:38 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:24.858 06:38:38 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:24.858 06:38:38 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:24.858 06:38:38 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:24.858 06:38:38 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.117 06:38:39 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:25.376 06:38:39 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:25.634 06:38:39 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:25.634 06:38:39 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:25.634 06:38:39 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:25.893 06:38:39 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:25.893 06:38:39 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:25.893 06:38:39 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a9f2b2c3-fac4-4c5b-8577-01f675f09109 lvol 150 00:08:26.180 06:38:39 -- target/nvmf_lvs_grow.sh@33 -- # lvol=ebcd87ec-84d1-4afc-a07a-8ca4f54807ab 00:08:26.180 06:38:39 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:26.180 06:38:39 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:26.180 [2024-12-14 06:38:40.123721] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:26.180 [2024-12-14 06:38:40.123821] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: 
*NOTICE*: Unsupported bdev event: type 1 00:08:26.180 true 00:08:26.180 06:38:40 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:26.180 06:38:40 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:26.443 06:38:40 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:26.443 06:38:40 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.702 06:38:40 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ebcd87ec-84d1-4afc-a07a-8ca4f54807ab 00:08:26.961 06:38:40 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:27.220 06:38:41 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:27.479 06:38:41 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=61122 00:08:27.479 06:38:41 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:27.479 06:38:41 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:27.479 06:38:41 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 61122 /var/tmp/bdevperf.sock 00:08:27.479 06:38:41 -- common/autotest_common.sh@829 -- # '[' -z 61122 ']' 00:08:27.479 06:38:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:27.479 06:38:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.479 06:38:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:27.479 06:38:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.479 06:38:41 -- common/autotest_common.sh@10 -- # set +x 00:08:27.479 [2024-12-14 06:38:41.440872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
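Editor's note: with the 150 MiB logical volume created on the lvstore, the trace above exports it over NVMe/TCP (subsystem cnode0, the lvol as its namespace, listener on 10.0.0.2:4420) and launches bdevperf as a separate SPDK application with its own RPC socket. A condensed sketch of that export-and-attach flow, with names and addresses copied from the log (illustration only, not the test script itself):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: expose the logical volume as an NVMe-oF namespace
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ebcd87ec-84d1-4afc-a07a-8ca4f54807ab
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevperf is its own SPDK app, so the controller is attached through
  # bdevperf's private RPC socket rather than /var/tmp/spdk.sock
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

Once Nvme0n1 shows up in bdev_get_bdevs, bdevperf.py perform_tests drives the 10-second randwrite workload whose per-second progress is printed below.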
00:08:27.479 [2024-12-14 06:38:41.440974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61122 ] 00:08:27.738 [2024-12-14 06:38:41.576962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.738 [2024-12-14 06:38:41.648836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.673 06:38:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.673 06:38:42 -- common/autotest_common.sh@862 -- # return 0 00:08:28.673 06:38:42 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:28.673 Nvme0n1 00:08:28.674 06:38:42 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:28.932 [ 00:08:28.932 { 00:08:28.932 "name": "Nvme0n1", 00:08:28.932 "aliases": [ 00:08:28.932 "ebcd87ec-84d1-4afc-a07a-8ca4f54807ab" 00:08:28.932 ], 00:08:28.932 "product_name": "NVMe disk", 00:08:28.932 "block_size": 4096, 00:08:28.932 "num_blocks": 38912, 00:08:28.932 "uuid": "ebcd87ec-84d1-4afc-a07a-8ca4f54807ab", 00:08:28.932 "assigned_rate_limits": { 00:08:28.932 "rw_ios_per_sec": 0, 00:08:28.932 "rw_mbytes_per_sec": 0, 00:08:28.932 "r_mbytes_per_sec": 0, 00:08:28.932 "w_mbytes_per_sec": 0 00:08:28.932 }, 00:08:28.932 "claimed": false, 00:08:28.932 "zoned": false, 00:08:28.932 "supported_io_types": { 00:08:28.932 "read": true, 00:08:28.932 "write": true, 00:08:28.932 "unmap": true, 00:08:28.932 "write_zeroes": true, 00:08:28.932 "flush": true, 00:08:28.932 "reset": true, 00:08:28.932 "compare": true, 00:08:28.932 "compare_and_write": true, 00:08:28.932 "abort": true, 00:08:28.932 "nvme_admin": true, 00:08:28.932 "nvme_io": true 00:08:28.932 }, 00:08:28.932 "driver_specific": { 00:08:28.932 "nvme": [ 00:08:28.932 { 00:08:28.932 "trid": { 00:08:28.932 "trtype": "TCP", 00:08:28.932 "adrfam": "IPv4", 00:08:28.932 "traddr": "10.0.0.2", 00:08:28.932 "trsvcid": "4420", 00:08:28.932 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:28.932 }, 00:08:28.932 "ctrlr_data": { 00:08:28.932 "cntlid": 1, 00:08:28.932 "vendor_id": "0x8086", 00:08:28.932 "model_number": "SPDK bdev Controller", 00:08:28.932 "serial_number": "SPDK0", 00:08:28.932 "firmware_revision": "24.01.1", 00:08:28.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:28.932 "oacs": { 00:08:28.932 "security": 0, 00:08:28.932 "format": 0, 00:08:28.932 "firmware": 0, 00:08:28.932 "ns_manage": 0 00:08:28.932 }, 00:08:28.932 "multi_ctrlr": true, 00:08:28.932 "ana_reporting": false 00:08:28.932 }, 00:08:28.932 "vs": { 00:08:28.932 "nvme_version": "1.3" 00:08:28.932 }, 00:08:28.932 "ns_data": { 00:08:28.932 "id": 1, 00:08:28.932 "can_share": true 00:08:28.932 } 00:08:28.932 } 00:08:28.932 ], 00:08:28.932 "mp_policy": "active_passive" 00:08:28.932 } 00:08:28.932 } 00:08:28.932 ] 00:08:28.932 06:38:42 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=61140 00:08:28.932 06:38:42 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:28.932 06:38:42 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:29.193 Running I/O for 10 seconds... 
00:08:30.128 Latency(us) 00:08:30.128 [2024-12-14T06:38:44.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.128 [2024-12-14T06:38:44.120Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.128 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:30.128 [2024-12-14T06:38:44.120Z] =================================================================================================================== 00:08:30.129 [2024-12-14T06:38:44.121Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:30.129 00:08:31.064 06:38:44 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:31.064 [2024-12-14T06:38:45.056Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.064 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:31.064 [2024-12-14T06:38:45.056Z] =================================================================================================================== 00:08:31.064 [2024-12-14T06:38:45.056Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:31.064 00:08:31.324 true 00:08:31.324 06:38:45 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:31.324 06:38:45 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:31.581 06:38:45 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:31.581 06:38:45 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:31.581 06:38:45 -- target/nvmf_lvs_grow.sh@65 -- # wait 61140 00:08:32.148 [2024-12-14T06:38:46.140Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.148 Nvme0n1 : 3.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:32.148 [2024-12-14T06:38:46.140Z] =================================================================================================================== 00:08:32.148 [2024-12-14T06:38:46.140Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:32.148 00:08:33.083 [2024-12-14T06:38:47.075Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.083 Nvme0n1 : 4.00 6826.25 26.67 0.00 0.00 0.00 0.00 0.00 00:08:33.084 [2024-12-14T06:38:47.076Z] =================================================================================================================== 00:08:33.084 [2024-12-14T06:38:47.076Z] Total : 6826.25 26.67 0.00 0.00 0.00 0.00 0.00 00:08:33.084 00:08:34.019 [2024-12-14T06:38:48.011Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.019 Nvme0n1 : 5.00 6807.20 26.59 0.00 0.00 0.00 0.00 0.00 00:08:34.019 [2024-12-14T06:38:48.011Z] =================================================================================================================== 00:08:34.019 [2024-12-14T06:38:48.011Z] Total : 6807.20 26.59 0.00 0.00 0.00 0.00 0.00 00:08:34.019 00:08:35.395 [2024-12-14T06:38:49.387Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.395 Nvme0n1 : 6.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:35.395 [2024-12-14T06:38:49.387Z] =================================================================================================================== 00:08:35.395 [2024-12-14T06:38:49.387Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:35.395 00:08:36.329 [2024-12-14T06:38:50.321Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:36.329 Nvme0n1 : 7.00 6577.71 25.69 0.00 0.00 0.00 0.00 0.00 00:08:36.329 [2024-12-14T06:38:50.321Z] =================================================================================================================== 00:08:36.329 [2024-12-14T06:38:50.321Z] Total : 6577.71 25.69 0.00 0.00 0.00 0.00 0.00 00:08:36.329 00:08:37.287 [2024-12-14T06:38:51.279Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.287 Nvme0n1 : 8.00 6565.12 25.65 0.00 0.00 0.00 0.00 0.00 00:08:37.287 [2024-12-14T06:38:51.280Z] =================================================================================================================== 00:08:37.288 [2024-12-14T06:38:51.280Z] Total : 6565.12 25.65 0.00 0.00 0.00 0.00 0.00 00:08:37.288 00:08:38.223 [2024-12-14T06:38:52.215Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.223 Nvme0n1 : 9.00 6541.22 25.55 0.00 0.00 0.00 0.00 0.00 00:08:38.223 [2024-12-14T06:38:52.215Z] =================================================================================================================== 00:08:38.223 [2024-12-14T06:38:52.215Z] Total : 6541.22 25.55 0.00 0.00 0.00 0.00 0.00 00:08:38.223 00:08:39.159 [2024-12-14T06:38:53.151Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.159 Nvme0n1 : 10.00 6534.80 25.53 0.00 0.00 0.00 0.00 0.00 00:08:39.159 [2024-12-14T06:38:53.151Z] =================================================================================================================== 00:08:39.159 [2024-12-14T06:38:53.151Z] Total : 6534.80 25.53 0.00 0.00 0.00 0.00 0.00 00:08:39.159 00:08:39.159 00:08:39.159 Latency(us) 00:08:39.159 [2024-12-14T06:38:53.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.159 [2024-12-14T06:38:53.151Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.159 Nvme0n1 : 10.00 6545.63 25.57 0.00 0.00 19550.70 13226.36 268816.76 00:08:39.159 [2024-12-14T06:38:53.151Z] =================================================================================================================== 00:08:39.159 [2024-12-14T06:38:53.151Z] Total : 6545.63 25.57 0.00 0.00 19550.70 13226.36 268816.76 00:08:39.159 0 00:08:39.159 06:38:52 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 61122 00:08:39.159 06:38:52 -- common/autotest_common.sh@936 -- # '[' -z 61122 ']' 00:08:39.159 06:38:52 -- common/autotest_common.sh@940 -- # kill -0 61122 00:08:39.159 06:38:52 -- common/autotest_common.sh@941 -- # uname 00:08:39.159 06:38:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:39.159 06:38:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61122 00:08:39.159 killing process with pid 61122 00:08:39.159 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.159 00:08:39.159 Latency(us) 00:08:39.159 [2024-12-14T06:38:53.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.159 [2024-12-14T06:38:53.151Z] =================================================================================================================== 00:08:39.159 [2024-12-14T06:38:53.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.159 06:38:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:39.159 06:38:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:39.159 06:38:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61122' 00:08:39.159 06:38:53 -- 
common/autotest_common.sh@955 -- # kill 61122 00:08:39.159 06:38:53 -- common/autotest_common.sh@960 -- # wait 61122 00:08:39.418 06:38:53 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:39.676 06:38:53 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:08:39.676 06:38:53 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:39.934 06:38:53 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:08:39.934 06:38:53 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:08:39.934 06:38:53 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 60770 00:08:39.934 06:38:53 -- target/nvmf_lvs_grow.sh@74 -- # wait 60770 00:08:39.934 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 60770 Killed "${NVMF_APP[@]}" "$@" 00:08:39.934 06:38:53 -- target/nvmf_lvs_grow.sh@74 -- # true 00:08:39.934 06:38:53 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:08:39.934 06:38:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:39.934 06:38:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:39.934 06:38:53 -- common/autotest_common.sh@10 -- # set +x 00:08:39.934 06:38:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:39.934 06:38:53 -- nvmf/common.sh@469 -- # nvmfpid=61272 00:08:39.934 06:38:53 -- nvmf/common.sh@470 -- # waitforlisten 61272 00:08:39.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.934 06:38:53 -- common/autotest_common.sh@829 -- # '[' -z 61272 ']' 00:08:39.934 06:38:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.934 06:38:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.934 06:38:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.934 06:38:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.934 06:38:53 -- common/autotest_common.sh@10 -- # set +x 00:08:39.934 [2024-12-14 06:38:53.882461] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:39.934 [2024-12-14 06:38:53.883098] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.192 [2024-12-14 06:38:54.022569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.192 [2024-12-14 06:38:54.072647] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:40.192 [2024-12-14 06:38:54.073113] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.192 [2024-12-14 06:38:54.073137] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.192 [2024-12-14 06:38:54.073146] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
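Editor's note: this is where the dirty variant diverges from lvs_grow_clean. The first nvmf target (pid 60770) is killed with SIGKILL while the lvstore is still loaded, so the blobstore metadata is never cleanly unloaded, and a fresh target (pid 61272) is started in the same namespace. When the same backing file is registered again below, the blobstore detects the unclean shutdown and runs recovery (the bs_recover notices that follow). A condensed sketch of that sequence, using the paths from the log (illustration only; the test does this through its waitforlisten/nvmfappstart helpers):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
  kill -9 "$nvmfpid"                                # simulate a crash: no clean lvstore unload
  ip netns exec nvmf_tgt_ns_spdk $TGT -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # after the new target starts answering on /var/tmp/spdk.sock, re-registering the same
  # AIO file reloads the lvstore and triggers blobstore recovery
  $RPC bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096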
00:08:40.192 [2024-12-14 06:38:54.073181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.128 06:38:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.128 06:38:54 -- common/autotest_common.sh@862 -- # return 0 00:08:41.128 06:38:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:41.128 06:38:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:41.128 06:38:54 -- common/autotest_common.sh@10 -- # set +x 00:08:41.128 06:38:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.128 06:38:54 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:41.386 [2024-12-14 06:38:55.194651] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:41.386 [2024-12-14 06:38:55.195088] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:41.386 [2024-12-14 06:38:55.195421] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:41.386 06:38:55 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:08:41.386 06:38:55 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev ebcd87ec-84d1-4afc-a07a-8ca4f54807ab 00:08:41.386 06:38:55 -- common/autotest_common.sh@897 -- # local bdev_name=ebcd87ec-84d1-4afc-a07a-8ca4f54807ab 00:08:41.386 06:38:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:41.386 06:38:55 -- common/autotest_common.sh@899 -- # local i 00:08:41.386 06:38:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:41.386 06:38:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:41.386 06:38:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:41.645 06:38:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ebcd87ec-84d1-4afc-a07a-8ca4f54807ab -t 2000 00:08:41.904 [ 00:08:41.904 { 00:08:41.904 "name": "ebcd87ec-84d1-4afc-a07a-8ca4f54807ab", 00:08:41.904 "aliases": [ 00:08:41.904 "lvs/lvol" 00:08:41.904 ], 00:08:41.904 "product_name": "Logical Volume", 00:08:41.904 "block_size": 4096, 00:08:41.904 "num_blocks": 38912, 00:08:41.904 "uuid": "ebcd87ec-84d1-4afc-a07a-8ca4f54807ab", 00:08:41.904 "assigned_rate_limits": { 00:08:41.904 "rw_ios_per_sec": 0, 00:08:41.904 "rw_mbytes_per_sec": 0, 00:08:41.904 "r_mbytes_per_sec": 0, 00:08:41.904 "w_mbytes_per_sec": 0 00:08:41.904 }, 00:08:41.904 "claimed": false, 00:08:41.904 "zoned": false, 00:08:41.904 "supported_io_types": { 00:08:41.904 "read": true, 00:08:41.904 "write": true, 00:08:41.904 "unmap": true, 00:08:41.904 "write_zeroes": true, 00:08:41.904 "flush": false, 00:08:41.904 "reset": true, 00:08:41.904 "compare": false, 00:08:41.904 "compare_and_write": false, 00:08:41.904 "abort": false, 00:08:41.904 "nvme_admin": false, 00:08:41.904 "nvme_io": false 00:08:41.904 }, 00:08:41.904 "driver_specific": { 00:08:41.904 "lvol": { 00:08:41.904 "lvol_store_uuid": "a9f2b2c3-fac4-4c5b-8577-01f675f09109", 00:08:41.904 "base_bdev": "aio_bdev", 00:08:41.904 "thin_provision": false, 00:08:41.904 "snapshot": false, 00:08:41.904 "clone": false, 00:08:41.904 "esnap_clone": false 00:08:41.904 } 00:08:41.904 } 00:08:41.904 } 00:08:41.904 ] 00:08:41.904 06:38:55 -- common/autotest_common.sh@905 -- # return 0 00:08:41.904 06:38:55 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:41.904 06:38:55 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:08:42.163 06:38:56 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:08:42.163 06:38:56 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:42.163 06:38:56 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:08:42.421 06:38:56 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:08:42.421 06:38:56 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:42.680 [2024-12-14 06:38:56.500427] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:42.680 06:38:56 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:42.680 06:38:56 -- common/autotest_common.sh@650 -- # local es=0 00:08:42.680 06:38:56 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:42.680 06:38:56 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.680 06:38:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.680 06:38:56 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.680 06:38:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.680 06:38:56 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.680 06:38:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.680 06:38:56 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:42.680 06:38:56 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:42.680 06:38:56 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:42.938 request: 00:08:42.938 { 00:08:42.939 "uuid": "a9f2b2c3-fac4-4c5b-8577-01f675f09109", 00:08:42.939 "method": "bdev_lvol_get_lvstores", 00:08:42.939 "req_id": 1 00:08:42.939 } 00:08:42.939 Got JSON-RPC error response 00:08:42.939 response: 00:08:42.939 { 00:08:42.939 "code": -19, 00:08:42.939 "message": "No such device" 00:08:42.939 } 00:08:42.939 06:38:56 -- common/autotest_common.sh@653 -- # es=1 00:08:42.939 06:38:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:42.939 06:38:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:42.939 06:38:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:42.939 06:38:56 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:43.198 aio_bdev 00:08:43.198 06:38:56 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev ebcd87ec-84d1-4afc-a07a-8ca4f54807ab 00:08:43.198 06:38:56 -- common/autotest_common.sh@897 -- # local bdev_name=ebcd87ec-84d1-4afc-a07a-8ca4f54807ab 00:08:43.198 06:38:56 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:43.198 06:38:56 -- common/autotest_common.sh@899 -- # local i 00:08:43.198 06:38:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:43.198 06:38:56 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:43.198 06:38:56 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:43.457 06:38:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ebcd87ec-84d1-4afc-a07a-8ca4f54807ab -t 2000 00:08:43.716 [ 00:08:43.716 { 00:08:43.716 "name": "ebcd87ec-84d1-4afc-a07a-8ca4f54807ab", 00:08:43.716 "aliases": [ 00:08:43.716 "lvs/lvol" 00:08:43.716 ], 00:08:43.716 "product_name": "Logical Volume", 00:08:43.716 "block_size": 4096, 00:08:43.716 "num_blocks": 38912, 00:08:43.716 "uuid": "ebcd87ec-84d1-4afc-a07a-8ca4f54807ab", 00:08:43.716 "assigned_rate_limits": { 00:08:43.716 "rw_ios_per_sec": 0, 00:08:43.716 "rw_mbytes_per_sec": 0, 00:08:43.716 "r_mbytes_per_sec": 0, 00:08:43.716 "w_mbytes_per_sec": 0 00:08:43.716 }, 00:08:43.716 "claimed": false, 00:08:43.716 "zoned": false, 00:08:43.716 "supported_io_types": { 00:08:43.716 "read": true, 00:08:43.716 "write": true, 00:08:43.716 "unmap": true, 00:08:43.716 "write_zeroes": true, 00:08:43.716 "flush": false, 00:08:43.716 "reset": true, 00:08:43.716 "compare": false, 00:08:43.716 "compare_and_write": false, 00:08:43.716 "abort": false, 00:08:43.716 "nvme_admin": false, 00:08:43.716 "nvme_io": false 00:08:43.716 }, 00:08:43.716 "driver_specific": { 00:08:43.716 "lvol": { 00:08:43.716 "lvol_store_uuid": "a9f2b2c3-fac4-4c5b-8577-01f675f09109", 00:08:43.716 "base_bdev": "aio_bdev", 00:08:43.716 "thin_provision": false, 00:08:43.716 "snapshot": false, 00:08:43.716 "clone": false, 00:08:43.716 "esnap_clone": false 00:08:43.716 } 00:08:43.716 } 00:08:43.716 } 00:08:43.716 ] 00:08:43.716 06:38:57 -- common/autotest_common.sh@905 -- # return 0 00:08:43.716 06:38:57 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:43.716 06:38:57 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:08:43.716 06:38:57 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:08:43.716 06:38:57 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:43.716 06:38:57 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:08:43.975 06:38:57 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:08:43.975 06:38:57 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ebcd87ec-84d1-4afc-a07a-8ca4f54807ab 00:08:44.234 06:38:58 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a9f2b2c3-fac4-4c5b-8577-01f675f09109 00:08:44.492 06:38:58 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:45.059 06:38:58 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:45.318 ************************************ 00:08:45.318 END TEST lvs_grow_dirty 00:08:45.318 ************************************ 00:08:45.318 00:08:45.318 real 0m20.340s 00:08:45.318 user 0m41.161s 00:08:45.318 sys 0m8.736s 00:08:45.318 06:38:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.318 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:08:45.318 06:38:59 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:45.318 06:38:59 -- common/autotest_common.sh@806 -- # type=--id 00:08:45.318 06:38:59 -- 
common/autotest_common.sh@807 -- # id=0 00:08:45.318 06:38:59 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:45.318 06:38:59 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:45.318 06:38:59 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:45.318 06:38:59 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:45.318 06:38:59 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:45.318 06:38:59 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:45.318 nvmf_trace.0 00:08:45.318 06:38:59 -- common/autotest_common.sh@821 -- # return 0 00:08:45.318 06:38:59 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:45.318 06:38:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:45.318 06:38:59 -- nvmf/common.sh@116 -- # sync 00:08:45.885 06:38:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:45.885 06:38:59 -- nvmf/common.sh@119 -- # set +e 00:08:45.885 06:38:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:45.885 06:38:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:45.885 rmmod nvme_tcp 00:08:45.885 rmmod nvme_fabrics 00:08:45.885 rmmod nvme_keyring 00:08:45.885 06:38:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:45.885 06:38:59 -- nvmf/common.sh@123 -- # set -e 00:08:45.885 06:38:59 -- nvmf/common.sh@124 -- # return 0 00:08:45.885 06:38:59 -- nvmf/common.sh@477 -- # '[' -n 61272 ']' 00:08:45.885 06:38:59 -- nvmf/common.sh@478 -- # killprocess 61272 00:08:45.885 06:38:59 -- common/autotest_common.sh@936 -- # '[' -z 61272 ']' 00:08:45.885 06:38:59 -- common/autotest_common.sh@940 -- # kill -0 61272 00:08:45.885 06:38:59 -- common/autotest_common.sh@941 -- # uname 00:08:45.885 06:38:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.885 06:38:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61272 00:08:45.885 killing process with pid 61272 00:08:45.885 06:38:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:45.885 06:38:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:45.885 06:38:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61272' 00:08:45.885 06:38:59 -- common/autotest_common.sh@955 -- # kill 61272 00:08:45.885 06:38:59 -- common/autotest_common.sh@960 -- # wait 61272 00:08:46.144 06:38:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:46.144 06:38:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:46.144 06:38:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:46.144 06:38:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.144 06:38:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:46.144 06:38:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.144 06:38:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.144 06:38:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.144 06:38:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:46.144 ************************************ 00:08:46.144 END TEST nvmf_lvs_grow 00:08:46.144 ************************************ 00:08:46.144 00:08:46.144 real 0m40.951s 00:08:46.144 user 1m5.000s 00:08:46.144 sys 0m11.957s 00:08:46.144 06:38:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.144 06:38:59 -- common/autotest_common.sh@10 -- # set +x 00:08:46.144 06:39:00 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:46.144 06:39:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:46.144 06:39:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.144 06:39:00 -- common/autotest_common.sh@10 -- # set +x 00:08:46.144 ************************************ 00:08:46.144 START TEST nvmf_bdev_io_wait 00:08:46.144 ************************************ 00:08:46.144 06:39:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:46.144 * Looking for test storage... 00:08:46.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:46.144 06:39:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:46.144 06:39:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:46.144 06:39:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:46.403 06:39:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:46.403 06:39:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:46.403 06:39:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:46.403 06:39:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:46.403 06:39:00 -- scripts/common.sh@335 -- # IFS=.-: 00:08:46.403 06:39:00 -- scripts/common.sh@335 -- # read -ra ver1 00:08:46.403 06:39:00 -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.403 06:39:00 -- scripts/common.sh@336 -- # read -ra ver2 00:08:46.403 06:39:00 -- scripts/common.sh@337 -- # local 'op=<' 00:08:46.403 06:39:00 -- scripts/common.sh@339 -- # ver1_l=2 00:08:46.403 06:39:00 -- scripts/common.sh@340 -- # ver2_l=1 00:08:46.403 06:39:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:46.403 06:39:00 -- scripts/common.sh@343 -- # case "$op" in 00:08:46.403 06:39:00 -- scripts/common.sh@344 -- # : 1 00:08:46.403 06:39:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:46.403 06:39:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:46.403 06:39:00 -- scripts/common.sh@364 -- # decimal 1 00:08:46.403 06:39:00 -- scripts/common.sh@352 -- # local d=1 00:08:46.403 06:39:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.403 06:39:00 -- scripts/common.sh@354 -- # echo 1 00:08:46.403 06:39:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:46.403 06:39:00 -- scripts/common.sh@365 -- # decimal 2 00:08:46.403 06:39:00 -- scripts/common.sh@352 -- # local d=2 00:08:46.403 06:39:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.403 06:39:00 -- scripts/common.sh@354 -- # echo 2 00:08:46.403 06:39:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:46.403 06:39:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:46.403 06:39:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:46.403 06:39:00 -- scripts/common.sh@367 -- # return 0 00:08:46.403 06:39:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.403 06:39:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:46.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.403 --rc genhtml_branch_coverage=1 00:08:46.403 --rc genhtml_function_coverage=1 00:08:46.403 --rc genhtml_legend=1 00:08:46.403 --rc geninfo_all_blocks=1 00:08:46.403 --rc geninfo_unexecuted_blocks=1 00:08:46.403 00:08:46.403 ' 00:08:46.403 06:39:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:46.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.403 --rc genhtml_branch_coverage=1 00:08:46.403 --rc genhtml_function_coverage=1 00:08:46.403 --rc genhtml_legend=1 00:08:46.403 --rc geninfo_all_blocks=1 00:08:46.403 --rc geninfo_unexecuted_blocks=1 00:08:46.403 00:08:46.403 ' 00:08:46.403 06:39:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:46.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.403 --rc genhtml_branch_coverage=1 00:08:46.403 --rc genhtml_function_coverage=1 00:08:46.403 --rc genhtml_legend=1 00:08:46.403 --rc geninfo_all_blocks=1 00:08:46.403 --rc geninfo_unexecuted_blocks=1 00:08:46.403 00:08:46.403 ' 00:08:46.403 06:39:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:46.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.403 --rc genhtml_branch_coverage=1 00:08:46.403 --rc genhtml_function_coverage=1 00:08:46.403 --rc genhtml_legend=1 00:08:46.403 --rc geninfo_all_blocks=1 00:08:46.403 --rc geninfo_unexecuted_blocks=1 00:08:46.403 00:08:46.403 ' 00:08:46.403 06:39:00 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:46.403 06:39:00 -- nvmf/common.sh@7 -- # uname -s 00:08:46.404 06:39:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.404 06:39:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.404 06:39:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.404 06:39:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.404 06:39:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.404 06:39:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.404 06:39:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.404 06:39:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.404 06:39:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.404 06:39:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.404 06:39:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 
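The NVME_HOSTNQN/NVME_HOSTID pair traced above comes from nvme gen-hostnqn; a minimal stand-alone sketch of the same derivation (assuming nvme-cli is installed; the UUID naturally differs on every run, and the parameter expansion is illustrative rather than the exact common.sh code) is:
# Sketch: generate a host NQN with nvme-cli and reuse its trailing UUID as the host ID.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the UUID part after the last colon
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"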
00:08:46.404 06:39:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:08:46.404 06:39:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.404 06:39:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.404 06:39:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:46.404 06:39:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:46.404 06:39:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.404 06:39:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.404 06:39:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.404 06:39:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.404 06:39:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.404 06:39:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.404 06:39:00 -- paths/export.sh@5 -- # export PATH 00:08:46.404 06:39:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.404 06:39:00 -- nvmf/common.sh@46 -- # : 0 00:08:46.404 06:39:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:46.404 06:39:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:46.404 06:39:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:46.404 06:39:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.404 06:39:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.404 06:39:00 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:46.404 06:39:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:46.404 06:39:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:46.404 06:39:00 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.404 06:39:00 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.404 06:39:00 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:46.404 06:39:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:46.404 06:39:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.404 06:39:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:46.404 06:39:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:46.404 06:39:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:46.404 06:39:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.404 06:39:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.404 06:39:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.404 06:39:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:46.404 06:39:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:46.404 06:39:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:46.404 06:39:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:46.404 06:39:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:46.404 06:39:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:46.404 06:39:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.404 06:39:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.404 06:39:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:46.404 06:39:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:46.404 06:39:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:46.404 06:39:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:46.404 06:39:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:46.404 06:39:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.404 06:39:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:46.404 06:39:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:46.404 06:39:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:46.404 06:39:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:46.404 06:39:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:46.404 06:39:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:46.404 Cannot find device "nvmf_tgt_br" 00:08:46.404 06:39:00 -- nvmf/common.sh@154 -- # true 00:08:46.404 06:39:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:46.404 Cannot find device "nvmf_tgt_br2" 00:08:46.404 06:39:00 -- nvmf/common.sh@155 -- # true 00:08:46.404 06:39:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:46.404 06:39:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:46.404 Cannot find device "nvmf_tgt_br" 00:08:46.404 06:39:00 -- nvmf/common.sh@157 -- # true 00:08:46.404 06:39:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:46.404 Cannot find device "nvmf_tgt_br2" 00:08:46.404 06:39:00 -- nvmf/common.sh@158 -- # true 00:08:46.404 06:39:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:46.404 06:39:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:46.663 06:39:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:46.663 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:46.663 06:39:00 -- nvmf/common.sh@161 -- # true 00:08:46.663 06:39:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:46.663 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:46.663 06:39:00 -- nvmf/common.sh@162 -- # true 00:08:46.663 06:39:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:46.663 06:39:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:46.663 06:39:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:46.663 06:39:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:46.663 06:39:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:46.663 06:39:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:46.663 06:39:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:46.663 06:39:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:46.663 06:39:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:46.663 06:39:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:46.663 06:39:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:46.663 06:39:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:46.663 06:39:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:46.663 06:39:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:46.663 06:39:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:46.663 06:39:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:46.663 06:39:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:46.663 06:39:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:46.663 06:39:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:46.663 06:39:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:46.663 06:39:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:46.663 06:39:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:46.663 06:39:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:46.663 06:39:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:46.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:08:46.663 00:08:46.663 --- 10.0.0.2 ping statistics --- 00:08:46.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.663 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:08:46.663 06:39:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:46.663 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:46.663 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:08:46.663 00:08:46.663 --- 10.0.0.3 ping statistics --- 00:08:46.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.663 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:46.663 06:39:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:46.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:46.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:46.663 00:08:46.663 --- 10.0.0.1 ping statistics --- 00:08:46.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.663 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:46.663 06:39:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.663 06:39:00 -- nvmf/common.sh@421 -- # return 0 00:08:46.663 06:39:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:46.663 06:39:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.663 06:39:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:46.663 06:39:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:46.663 06:39:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.663 06:39:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:46.663 06:39:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:46.663 06:39:00 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:46.663 06:39:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:46.663 06:39:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:46.663 06:39:00 -- common/autotest_common.sh@10 -- # set +x 00:08:46.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.663 06:39:00 -- nvmf/common.sh@469 -- # nvmfpid=61606 00:08:46.663 06:39:00 -- nvmf/common.sh@470 -- # waitforlisten 61606 00:08:46.663 06:39:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:46.663 06:39:00 -- common/autotest_common.sh@829 -- # '[' -z 61606 ']' 00:08:46.663 06:39:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.663 06:39:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.663 06:39:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.663 06:39:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.663 06:39:00 -- common/autotest_common.sh@10 -- # set +x 00:08:46.922 [2024-12-14 06:39:00.672203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:46.922 [2024-12-14 06:39:00.672315] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.922 [2024-12-14 06:39:00.815989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.922 [2024-12-14 06:39:00.886913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:46.922 [2024-12-14 06:39:00.887320] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.922 [2024-12-14 06:39:00.887473] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.922 [2024-12-14 06:39:00.887623] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
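The nvmf_veth_init trace above amounts to a small virtual topology: veth pairs whose far ends live in a target network namespace, host-side peers enslaved to a bridge, and an iptables rule opening TCP port 4420. A condensed, hand-runnable sketch of the same setup (device names and addresses copied from the trace; the second target interface nvmf_tgt_if2/10.0.0.3 is omitted for brevity; assumes root plus iproute2 and iptables):
# Condensed sketch of the nvmf_veth_init steps traced above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator-to-target reachability, as verified in the trace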
00:08:46.922 [2024-12-14 06:39:00.887823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.922 [2024-12-14 06:39:00.887995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.922 [2024-12-14 06:39:00.888110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.922 [2024-12-14 06:39:00.888117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.860 06:39:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.860 06:39:01 -- common/autotest_common.sh@862 -- # return 0 00:08:47.861 06:39:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:47.861 06:39:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:47.861 06:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:47.861 06:39:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.861 06:39:01 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:47.861 06:39:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.861 06:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:47.861 06:39:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.861 06:39:01 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:47.861 06:39:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.861 06:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:47.861 06:39:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.861 06:39:01 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.861 06:39:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.861 06:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:47.861 [2024-12-14 06:39:01.785235] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.861 06:39:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.861 06:39:01 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:47.861 06:39:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.861 06:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:47.861 Malloc0 00:08:47.861 06:39:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.861 06:39:01 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:47.861 06:39:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.861 06:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:47.861 06:39:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.861 06:39:01 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:47.861 06:39:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.861 06:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:47.861 06:39:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.861 06:39:01 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.861 06:39:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.861 06:39:01 -- common/autotest_common.sh@10 -- # set +x 00:08:47.861 [2024-12-14 06:39:01.846217] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.120 06:39:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.120 06:39:01 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=61641 00:08:48.120 06:39:01 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:48.120 06:39:01 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:48.120 06:39:01 -- target/bdev_io_wait.sh@30 -- # READ_PID=61643 00:08:48.120 06:39:01 -- nvmf/common.sh@520 -- # config=() 00:08:48.120 06:39:01 -- nvmf/common.sh@520 -- # local subsystem config 00:08:48.120 06:39:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:48.120 06:39:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:48.120 { 00:08:48.120 "params": { 00:08:48.120 "name": "Nvme$subsystem", 00:08:48.120 "trtype": "$TEST_TRANSPORT", 00:08:48.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:48.120 "adrfam": "ipv4", 00:08:48.120 "trsvcid": "$NVMF_PORT", 00:08:48.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:48.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:48.120 "hdgst": ${hdgst:-false}, 00:08:48.120 "ddgst": ${ddgst:-false} 00:08:48.120 }, 00:08:48.120 "method": "bdev_nvme_attach_controller" 00:08:48.120 } 00:08:48.120 EOF 00:08:48.120 )") 00:08:48.120 06:39:01 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:48.120 06:39:01 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:48.120 06:39:01 -- nvmf/common.sh@520 -- # config=() 00:08:48.121 06:39:01 -- nvmf/common.sh@520 -- # local subsystem config 00:08:48.121 06:39:01 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=61646 00:08:48.121 06:39:01 -- nvmf/common.sh@542 -- # cat 00:08:48.121 06:39:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:48.121 06:39:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:48.121 { 00:08:48.121 "params": { 00:08:48.121 "name": "Nvme$subsystem", 00:08:48.121 "trtype": "$TEST_TRANSPORT", 00:08:48.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:48.121 "adrfam": "ipv4", 00:08:48.121 "trsvcid": "$NVMF_PORT", 00:08:48.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:48.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:48.121 "hdgst": ${hdgst:-false}, 00:08:48.121 "ddgst": ${ddgst:-false} 00:08:48.121 }, 00:08:48.121 "method": "bdev_nvme_attach_controller" 00:08:48.121 } 00:08:48.121 EOF 00:08:48.121 )") 00:08:48.121 06:39:01 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:48.121 06:39:01 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=61648 00:08:48.121 06:39:01 -- nvmf/common.sh@542 -- # cat 00:08:48.121 06:39:01 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:48.121 06:39:01 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:48.121 06:39:01 -- nvmf/common.sh@520 -- # config=() 00:08:48.121 06:39:01 -- nvmf/common.sh@520 -- # local subsystem config 00:08:48.121 06:39:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:48.121 06:39:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:48.121 { 00:08:48.121 "params": { 00:08:48.121 "name": "Nvme$subsystem", 00:08:48.121 "trtype": "$TEST_TRANSPORT", 00:08:48.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:48.121 "adrfam": "ipv4", 00:08:48.121 "trsvcid": "$NVMF_PORT", 00:08:48.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:48.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:08:48.121 "hdgst": ${hdgst:-false}, 00:08:48.121 "ddgst": ${ddgst:-false} 00:08:48.121 }, 00:08:48.121 "method": "bdev_nvme_attach_controller" 00:08:48.121 } 00:08:48.121 EOF 00:08:48.121 )") 00:08:48.121 06:39:01 -- target/bdev_io_wait.sh@35 -- # sync 00:08:48.121 06:39:01 -- nvmf/common.sh@544 -- # jq . 00:08:48.121 06:39:01 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:48.121 06:39:01 -- nvmf/common.sh@520 -- # config=() 00:08:48.121 06:39:01 -- nvmf/common.sh@520 -- # local subsystem config 00:08:48.121 06:39:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:48.121 06:39:01 -- nvmf/common.sh@544 -- # jq . 00:08:48.121 06:39:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:48.121 { 00:08:48.121 "params": { 00:08:48.121 "name": "Nvme$subsystem", 00:08:48.121 "trtype": "$TEST_TRANSPORT", 00:08:48.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:48.121 "adrfam": "ipv4", 00:08:48.121 "trsvcid": "$NVMF_PORT", 00:08:48.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:48.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:48.121 "hdgst": ${hdgst:-false}, 00:08:48.121 "ddgst": ${ddgst:-false} 00:08:48.121 }, 00:08:48.121 "method": "bdev_nvme_attach_controller" 00:08:48.121 } 00:08:48.121 EOF 00:08:48.121 )") 00:08:48.121 06:39:01 -- nvmf/common.sh@542 -- # cat 00:08:48.121 06:39:01 -- nvmf/common.sh@545 -- # IFS=, 00:08:48.121 06:39:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:48.121 "params": { 00:08:48.121 "name": "Nvme1", 00:08:48.121 "trtype": "tcp", 00:08:48.121 "traddr": "10.0.0.2", 00:08:48.121 "adrfam": "ipv4", 00:08:48.121 "trsvcid": "4420", 00:08:48.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:48.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:48.121 "hdgst": false, 00:08:48.121 "ddgst": false 00:08:48.121 }, 00:08:48.121 "method": "bdev_nvme_attach_controller" 00:08:48.121 }' 00:08:48.121 06:39:01 -- nvmf/common.sh@545 -- # IFS=, 00:08:48.121 06:39:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:48.121 "params": { 00:08:48.121 "name": "Nvme1", 00:08:48.121 "trtype": "tcp", 00:08:48.121 "traddr": "10.0.0.2", 00:08:48.121 "adrfam": "ipv4", 00:08:48.121 "trsvcid": "4420", 00:08:48.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:48.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:48.121 "hdgst": false, 00:08:48.121 "ddgst": false 00:08:48.121 }, 00:08:48.121 "method": "bdev_nvme_attach_controller" 00:08:48.121 }' 00:08:48.121 06:39:01 -- nvmf/common.sh@544 -- # jq . 00:08:48.121 06:39:01 -- nvmf/common.sh@542 -- # cat 00:08:48.121 06:39:01 -- nvmf/common.sh@545 -- # IFS=, 00:08:48.121 06:39:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:48.121 "params": { 00:08:48.121 "name": "Nvme1", 00:08:48.121 "trtype": "tcp", 00:08:48.121 "traddr": "10.0.0.2", 00:08:48.121 "adrfam": "ipv4", 00:08:48.121 "trsvcid": "4420", 00:08:48.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:48.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:48.121 "hdgst": false, 00:08:48.121 "ddgst": false 00:08:48.121 }, 00:08:48.121 "method": "bdev_nvme_attach_controller" 00:08:48.121 }' 00:08:48.121 06:39:01 -- nvmf/common.sh@544 -- # jq . 
00:08:48.121 06:39:01 -- nvmf/common.sh@545 -- # IFS=, 00:08:48.121 06:39:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:48.121 "params": { 00:08:48.121 "name": "Nvme1", 00:08:48.121 "trtype": "tcp", 00:08:48.121 "traddr": "10.0.0.2", 00:08:48.121 "adrfam": "ipv4", 00:08:48.121 "trsvcid": "4420", 00:08:48.121 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:48.121 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:48.121 "hdgst": false, 00:08:48.121 "ddgst": false 00:08:48.121 }, 00:08:48.121 "method": "bdev_nvme_attach_controller" 00:08:48.121 }' 00:08:48.121 [2024-12-14 06:39:01.908556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:48.121 [2024-12-14 06:39:01.909396] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:48.121 [2024-12-14 06:39:01.911144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:48.121 [2024-12-14 06:39:01.911395] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:48.121 06:39:01 -- target/bdev_io_wait.sh@37 -- # wait 61641 00:08:48.121 [2024-12-14 06:39:01.939968] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:48.121 [2024-12-14 06:39:01.940285] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:48.121 [2024-12-14 06:39:01.963839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:48.121 [2024-12-14 06:39:01.964330] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:48.121 [2024-12-14 06:39:02.089170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.380 [2024-12-14 06:39:02.135152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.380 [2024-12-14 06:39:02.143322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:08:48.380 [2024-12-14 06:39:02.179623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.380 [2024-12-14 06:39:02.188590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:08:48.380 [2024-12-14 06:39:02.222600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.380 [2024-12-14 06:39:02.233501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:48.380 Running I/O for 1 seconds... 00:08:48.380 [2024-12-14 06:39:02.276675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:08:48.380 Running I/O for 1 seconds... 00:08:48.380 Running I/O for 1 seconds... 00:08:48.638 Running I/O for 1 seconds... 
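Each of the four bdevperf jobs launched above receives a generated JSON config on /dev/fd/63 containing exactly the bdev_nvme_attach_controller parameters printed in the trace. A stand-alone sketch of the write job follows; the outer "subsystems"/"config" wrapper is the usual SPDK JSON-config shape and an assumption here, as is writing the config to a temporary file instead of a process substitution:
# Sketch: feed bdevperf the attach-controller config seen in the trace and run the write workload.
cat > /tmp/nvme1.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
            "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
    --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256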
00:08:49.574 00:08:49.574 Latency(us) 00:08:49.574 [2024-12-14T06:39:03.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.574 [2024-12-14T06:39:03.566Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:49.574 Nvme1n1 : 1.00 167753.54 655.29 0.00 0.00 760.31 336.99 2174.60 00:08:49.574 [2024-12-14T06:39:03.566Z] =================================================================================================================== 00:08:49.574 [2024-12-14T06:39:03.566Z] Total : 167753.54 655.29 0.00 0.00 760.31 336.99 2174.60 00:08:49.574 00:08:49.574 Latency(us) 00:08:49.574 [2024-12-14T06:39:03.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.574 [2024-12-14T06:39:03.566Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:49.574 Nvme1n1 : 1.01 10372.52 40.52 0.00 0.00 12286.17 7983.48 18945.86 00:08:49.574 [2024-12-14T06:39:03.566Z] =================================================================================================================== 00:08:49.574 [2024-12-14T06:39:03.566Z] Total : 10372.52 40.52 0.00 0.00 12286.17 7983.48 18945.86 00:08:49.574 00:08:49.574 Latency(us) 00:08:49.574 [2024-12-14T06:39:03.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.574 [2024-12-14T06:39:03.566Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:49.574 Nvme1n1 : 1.01 8209.53 32.07 0.00 0.00 15519.08 7923.90 26929.34 00:08:49.574 [2024-12-14T06:39:03.566Z] =================================================================================================================== 00:08:49.574 [2024-12-14T06:39:03.566Z] Total : 8209.53 32.07 0.00 0.00 15519.08 7923.90 26929.34 00:08:49.574 00:08:49.574 Latency(us) 00:08:49.574 [2024-12-14T06:39:03.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.574 [2024-12-14T06:39:03.566Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:49.574 Nvme1n1 : 1.01 8555.77 33.42 0.00 0.00 14903.20 6881.28 25856.93 00:08:49.574 [2024-12-14T06:39:03.566Z] =================================================================================================================== 00:08:49.574 [2024-12-14T06:39:03.566Z] Total : 8555.77 33.42 0.00 0.00 14903.20 6881.28 25856.93 00:08:49.574 06:39:03 -- target/bdev_io_wait.sh@38 -- # wait 61643 00:08:49.574 06:39:03 -- target/bdev_io_wait.sh@39 -- # wait 61646 00:08:49.574 06:39:03 -- target/bdev_io_wait.sh@40 -- # wait 61648 00:08:49.832 06:39:03 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.832 06:39:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.832 06:39:03 -- common/autotest_common.sh@10 -- # set +x 00:08:49.832 06:39:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.832 06:39:03 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:49.832 06:39:03 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:49.832 06:39:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:49.833 06:39:03 -- nvmf/common.sh@116 -- # sync 00:08:49.833 06:39:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:49.833 06:39:03 -- nvmf/common.sh@119 -- # set +e 00:08:49.833 06:39:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:49.833 06:39:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:49.833 rmmod nvme_tcp 00:08:49.833 rmmod nvme_fabrics 00:08:49.833 rmmod nvme_keyring 00:08:49.833 06:39:03 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:49.833 06:39:03 -- nvmf/common.sh@123 -- # set -e 00:08:49.833 06:39:03 -- nvmf/common.sh@124 -- # return 0 00:08:49.833 06:39:03 -- nvmf/common.sh@477 -- # '[' -n 61606 ']' 00:08:49.833 06:39:03 -- nvmf/common.sh@478 -- # killprocess 61606 00:08:49.833 06:39:03 -- common/autotest_common.sh@936 -- # '[' -z 61606 ']' 00:08:49.833 06:39:03 -- common/autotest_common.sh@940 -- # kill -0 61606 00:08:49.833 06:39:03 -- common/autotest_common.sh@941 -- # uname 00:08:49.833 06:39:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:49.833 06:39:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61606 00:08:49.833 killing process with pid 61606 00:08:49.833 06:39:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:49.833 06:39:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:49.833 06:39:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61606' 00:08:49.833 06:39:03 -- common/autotest_common.sh@955 -- # kill 61606 00:08:49.833 06:39:03 -- common/autotest_common.sh@960 -- # wait 61606 00:08:50.092 06:39:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:50.092 06:39:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:50.092 06:39:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:50.092 06:39:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.092 06:39:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:50.092 06:39:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.092 06:39:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.092 06:39:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.092 06:39:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:50.092 00:08:50.092 real 0m3.907s 00:08:50.092 user 0m16.640s 00:08:50.092 sys 0m1.987s 00:08:50.092 06:39:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.092 06:39:03 -- common/autotest_common.sh@10 -- # set +x 00:08:50.092 ************************************ 00:08:50.092 END TEST nvmf_bdev_io_wait 00:08:50.092 ************************************ 00:08:50.092 06:39:03 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:50.092 06:39:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:50.092 06:39:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.092 06:39:03 -- common/autotest_common.sh@10 -- # set +x 00:08:50.092 ************************************ 00:08:50.092 START TEST nvmf_queue_depth 00:08:50.092 ************************************ 00:08:50.092 06:39:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:50.092 * Looking for test storage... 
00:08:50.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:50.092 06:39:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:50.092 06:39:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:50.092 06:39:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:50.352 06:39:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:50.352 06:39:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:50.352 06:39:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:50.352 06:39:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:50.352 06:39:04 -- scripts/common.sh@335 -- # IFS=.-: 00:08:50.352 06:39:04 -- scripts/common.sh@335 -- # read -ra ver1 00:08:50.352 06:39:04 -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.352 06:39:04 -- scripts/common.sh@336 -- # read -ra ver2 00:08:50.352 06:39:04 -- scripts/common.sh@337 -- # local 'op=<' 00:08:50.352 06:39:04 -- scripts/common.sh@339 -- # ver1_l=2 00:08:50.352 06:39:04 -- scripts/common.sh@340 -- # ver2_l=1 00:08:50.352 06:39:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:50.352 06:39:04 -- scripts/common.sh@343 -- # case "$op" in 00:08:50.352 06:39:04 -- scripts/common.sh@344 -- # : 1 00:08:50.352 06:39:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:50.352 06:39:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:50.352 06:39:04 -- scripts/common.sh@364 -- # decimal 1 00:08:50.352 06:39:04 -- scripts/common.sh@352 -- # local d=1 00:08:50.352 06:39:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.352 06:39:04 -- scripts/common.sh@354 -- # echo 1 00:08:50.352 06:39:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:50.352 06:39:04 -- scripts/common.sh@365 -- # decimal 2 00:08:50.352 06:39:04 -- scripts/common.sh@352 -- # local d=2 00:08:50.352 06:39:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.352 06:39:04 -- scripts/common.sh@354 -- # echo 2 00:08:50.352 06:39:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:50.352 06:39:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:50.352 06:39:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:50.352 06:39:04 -- scripts/common.sh@367 -- # return 0 00:08:50.352 06:39:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.352 06:39:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:50.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.352 --rc genhtml_branch_coverage=1 00:08:50.352 --rc genhtml_function_coverage=1 00:08:50.352 --rc genhtml_legend=1 00:08:50.352 --rc geninfo_all_blocks=1 00:08:50.352 --rc geninfo_unexecuted_blocks=1 00:08:50.352 00:08:50.352 ' 00:08:50.352 06:39:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:50.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.352 --rc genhtml_branch_coverage=1 00:08:50.352 --rc genhtml_function_coverage=1 00:08:50.352 --rc genhtml_legend=1 00:08:50.352 --rc geninfo_all_blocks=1 00:08:50.352 --rc geninfo_unexecuted_blocks=1 00:08:50.352 00:08:50.352 ' 00:08:50.352 06:39:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:50.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.352 --rc genhtml_branch_coverage=1 00:08:50.352 --rc genhtml_function_coverage=1 00:08:50.352 --rc genhtml_legend=1 00:08:50.352 --rc geninfo_all_blocks=1 00:08:50.352 --rc geninfo_unexecuted_blocks=1 00:08:50.352 00:08:50.352 ' 00:08:50.352 
06:39:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:50.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.352 --rc genhtml_branch_coverage=1 00:08:50.352 --rc genhtml_function_coverage=1 00:08:50.352 --rc genhtml_legend=1 00:08:50.352 --rc geninfo_all_blocks=1 00:08:50.352 --rc geninfo_unexecuted_blocks=1 00:08:50.352 00:08:50.352 ' 00:08:50.352 06:39:04 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.352 06:39:04 -- nvmf/common.sh@7 -- # uname -s 00:08:50.352 06:39:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.352 06:39:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.352 06:39:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.352 06:39:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.352 06:39:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.352 06:39:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.352 06:39:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.352 06:39:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.352 06:39:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.352 06:39:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.352 06:39:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:08:50.352 06:39:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:08:50.352 06:39:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.352 06:39:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.352 06:39:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:50.352 06:39:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.352 06:39:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.352 06:39:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.352 06:39:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.352 06:39:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.352 06:39:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.352 06:39:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.352 06:39:04 -- paths/export.sh@5 -- # export PATH 00:08:50.353 06:39:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.353 06:39:04 -- nvmf/common.sh@46 -- # : 0 00:08:50.353 06:39:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:50.353 06:39:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:50.353 06:39:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:50.353 06:39:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.353 06:39:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.353 06:39:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:50.353 06:39:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:50.353 06:39:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:50.353 06:39:04 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:50.353 06:39:04 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:50.353 06:39:04 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:50.353 06:39:04 -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:50.353 06:39:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:50.353 06:39:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.353 06:39:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:50.353 06:39:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:50.353 06:39:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:50.353 06:39:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.353 06:39:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.353 06:39:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.353 06:39:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:50.353 06:39:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:50.353 06:39:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:50.353 06:39:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:50.353 06:39:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:50.353 06:39:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:50.353 06:39:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.353 06:39:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.353 06:39:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:50.353 06:39:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:50.353 06:39:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:50.353 06:39:04 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:50.353 06:39:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:50.353 06:39:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.353 06:39:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:50.353 06:39:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:50.353 06:39:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:50.353 06:39:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:50.353 06:39:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:50.353 06:39:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:50.353 Cannot find device "nvmf_tgt_br" 00:08:50.353 06:39:04 -- nvmf/common.sh@154 -- # true 00:08:50.353 06:39:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.353 Cannot find device "nvmf_tgt_br2" 00:08:50.353 06:39:04 -- nvmf/common.sh@155 -- # true 00:08:50.353 06:39:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:50.353 06:39:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:50.353 Cannot find device "nvmf_tgt_br" 00:08:50.353 06:39:04 -- nvmf/common.sh@157 -- # true 00:08:50.353 06:39:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:50.353 Cannot find device "nvmf_tgt_br2" 00:08:50.353 06:39:04 -- nvmf/common.sh@158 -- # true 00:08:50.353 06:39:04 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:50.353 06:39:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:50.353 06:39:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.353 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.612 06:39:04 -- nvmf/common.sh@161 -- # true 00:08:50.612 06:39:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.612 06:39:04 -- nvmf/common.sh@162 -- # true 00:08:50.612 06:39:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:50.612 06:39:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:50.612 06:39:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:50.612 06:39:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:50.612 06:39:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:50.612 06:39:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:50.612 06:39:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:50.612 06:39:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:50.612 06:39:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:50.612 06:39:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:50.612 06:39:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:50.612 06:39:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:50.612 06:39:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:50.612 06:39:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:50.612 06:39:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:08:50.612 06:39:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:50.612 06:39:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:50.612 06:39:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:50.612 06:39:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:50.612 06:39:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:50.612 06:39:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:50.612 06:39:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:50.612 06:39:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:50.612 06:39:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:50.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:50.612 00:08:50.612 --- 10.0.0.2 ping statistics --- 00:08:50.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.612 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:50.612 06:39:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:50.612 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:50.612 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:08:50.612 00:08:50.612 --- 10.0.0.3 ping statistics --- 00:08:50.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.612 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:50.612 06:39:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:50.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:50.612 00:08:50.612 --- 10.0.0.1 ping statistics --- 00:08:50.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.612 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:50.612 06:39:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.612 06:39:04 -- nvmf/common.sh@421 -- # return 0 00:08:50.612 06:39:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:50.612 06:39:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.612 06:39:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:50.612 06:39:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:50.612 06:39:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.612 06:39:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:50.612 06:39:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:50.612 06:39:04 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:50.612 06:39:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:50.612 06:39:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.612 06:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:50.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
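The nvmfappstart step that follows boils down to launching nvmf_tgt inside the target namespace and waiting for its RPC socket to answer. A rough equivalent is sketched below; the polling loop is illustrative (the harness uses its own waitforlisten helper), and rpc_get_methods is used only as a cheap liveness probe:
# Sketch: start the target in the namespace and poll /var/tmp/spdk.sock until it serves RPCs.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
echo "nvmf_tgt ready as pid $nvmfpid"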
00:08:50.612 06:39:04 -- nvmf/common.sh@469 -- # nvmfpid=61884 00:08:50.612 06:39:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:50.612 06:39:04 -- nvmf/common.sh@470 -- # waitforlisten 61884 00:08:50.612 06:39:04 -- common/autotest_common.sh@829 -- # '[' -z 61884 ']' 00:08:50.612 06:39:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.612 06:39:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.612 06:39:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.612 06:39:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.612 06:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:50.612 [2024-12-14 06:39:04.599282] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:50.612 [2024-12-14 06:39:04.599365] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.873 [2024-12-14 06:39:04.734574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.873 [2024-12-14 06:39:04.786193] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:50.873 [2024-12-14 06:39:04.786358] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.873 [2024-12-14 06:39:04.786372] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.873 [2024-12-14 06:39:04.786380] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
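Once the target is up, the rpc_cmd calls that follow (mirroring the earlier bdev_io_wait setup) provision the subsystem in five RPCs. Spelled out against the repo's rpc.py with the same arguments as the trace, they are roughly:
# Sketch of the provisioning RPCs traced below.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport with the harness's options
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420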
00:08:50.873 [2024-12-14 06:39:04.786426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.132 06:39:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.132 06:39:04 -- common/autotest_common.sh@862 -- # return 0 00:08:51.132 06:39:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:51.132 06:39:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.132 06:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 06:39:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.132 06:39:04 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.132 06:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 06:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 [2024-12-14 06:39:04.943358] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.132 06:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 06:39:04 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:51.132 06:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 06:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 Malloc0 00:08:51.132 06:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 06:39:04 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:51.132 06:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 06:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 06:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 06:39:04 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.132 06:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 06:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 06:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 06:39:04 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.132 06:39:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.132 06:39:04 -- common/autotest_common.sh@10 -- # set +x 00:08:51.132 [2024-12-14 06:39:04.995379] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.132 06:39:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.132 06:39:04 -- target/queue_depth.sh@30 -- # bdevperf_pid=61908 00:08:51.133 06:39:05 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:51.133 06:39:05 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:51.133 06:39:05 -- target/queue_depth.sh@33 -- # waitforlisten 61908 /var/tmp/bdevperf.sock 00:08:51.133 06:39:05 -- common/autotest_common.sh@829 -- # '[' -z 61908 ']' 00:08:51.133 06:39:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:51.133 06:39:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:51.133 06:39:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
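The rpc_cmd calls above are the whole target-side configuration for the queue_depth test. Written out as plain rpc.py invocations against the default /var/tmp/spdk.sock they would look roughly like this (every argument is copied from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, with the tuning flags the test passes
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener up on 10.0.0.2:4420, the bdevperf instance started above (-q 1024 -o 4096 -w verify -t 10) is what actually drives I/O against it.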
00:08:51.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:51.133 06:39:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:51.133 06:39:05 -- common/autotest_common.sh@10 -- # set +x 00:08:51.133 [2024-12-14 06:39:05.059028] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:51.133 [2024-12-14 06:39:05.059398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61908 ] 00:08:51.391 [2024-12-14 06:39:05.198602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.391 [2024-12-14 06:39:05.252323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.327 06:39:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:52.327 06:39:06 -- common/autotest_common.sh@862 -- # return 0 00:08:52.327 06:39:06 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:52.327 06:39:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.327 06:39:06 -- common/autotest_common.sh@10 -- # set +x 00:08:52.327 NVMe0n1 00:08:52.327 06:39:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.327 06:39:06 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:52.327 Running I/O for 10 seconds... 00:09:04.537 00:09:04.537 Latency(us) 00:09:04.537 [2024-12-14T06:39:18.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.537 [2024-12-14T06:39:18.529Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:04.537 Verification LBA range: start 0x0 length 0x4000 00:09:04.537 NVMe0n1 : 10.06 15750.58 61.53 0.00 0.00 64768.77 14120.03 54096.99 00:09:04.537 [2024-12-14T06:39:18.529Z] =================================================================================================================== 00:09:04.537 [2024-12-14T06:39:18.529Z] Total : 15750.58 61.53 0.00 0.00 64768.77 14120.03 54096.99 00:09:04.537 0 00:09:04.537 06:39:16 -- target/queue_depth.sh@39 -- # killprocess 61908 00:09:04.537 06:39:16 -- common/autotest_common.sh@936 -- # '[' -z 61908 ']' 00:09:04.537 06:39:16 -- common/autotest_common.sh@940 -- # kill -0 61908 00:09:04.537 06:39:16 -- common/autotest_common.sh@941 -- # uname 00:09:04.537 06:39:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:04.537 06:39:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61908 00:09:04.537 killing process with pid 61908 00:09:04.537 Received shutdown signal, test time was about 10.000000 seconds 00:09:04.537 00:09:04.537 Latency(us) 00:09:04.537 [2024-12-14T06:39:18.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.537 [2024-12-14T06:39:18.530Z] =================================================================================================================== 00:09:04.538 [2024-12-14T06:39:18.530Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:04.538 06:39:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:04.538 06:39:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:04.538 06:39:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61908' 
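That is the bdevperf half of the test: bdevperf runs as a second SPDK app on its own RPC socket, a TCP NVMe-oF controller is attached to it, and perform_tests launches the 10-second verify workload at queue depth 1024, landing at roughly 15.7k IOPS / 61.5 MiB/s for 4 KiB I/O. Condensed to its three steps (paths and arguments copied from the trace):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &   # -z: stay idle until told to run over RPC
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests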
00:09:04.538 06:39:16 -- common/autotest_common.sh@955 -- # kill 61908 00:09:04.538 06:39:16 -- common/autotest_common.sh@960 -- # wait 61908 00:09:04.538 06:39:16 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:04.538 06:39:16 -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:04.538 06:39:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:04.538 06:39:16 -- nvmf/common.sh@116 -- # sync 00:09:04.538 06:39:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:04.538 06:39:16 -- nvmf/common.sh@119 -- # set +e 00:09:04.538 06:39:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:04.538 06:39:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:04.538 rmmod nvme_tcp 00:09:04.538 rmmod nvme_fabrics 00:09:04.538 rmmod nvme_keyring 00:09:04.538 06:39:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:04.538 06:39:16 -- nvmf/common.sh@123 -- # set -e 00:09:04.538 06:39:16 -- nvmf/common.sh@124 -- # return 0 00:09:04.538 06:39:16 -- nvmf/common.sh@477 -- # '[' -n 61884 ']' 00:09:04.538 06:39:16 -- nvmf/common.sh@478 -- # killprocess 61884 00:09:04.538 06:39:16 -- common/autotest_common.sh@936 -- # '[' -z 61884 ']' 00:09:04.538 06:39:16 -- common/autotest_common.sh@940 -- # kill -0 61884 00:09:04.538 06:39:16 -- common/autotest_common.sh@941 -- # uname 00:09:04.538 06:39:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:04.538 06:39:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61884 00:09:04.538 killing process with pid 61884 00:09:04.538 06:39:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:04.538 06:39:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:04.538 06:39:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61884' 00:09:04.538 06:39:16 -- common/autotest_common.sh@955 -- # kill 61884 00:09:04.538 06:39:16 -- common/autotest_common.sh@960 -- # wait 61884 00:09:04.538 06:39:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:04.538 06:39:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:04.538 06:39:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:04.538 06:39:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:04.538 06:39:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:04.538 06:39:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.538 06:39:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.538 06:39:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.538 06:39:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:04.538 00:09:04.538 real 0m12.949s 00:09:04.538 user 0m22.971s 00:09:04.538 sys 0m1.964s 00:09:04.538 06:39:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:04.538 06:39:16 -- common/autotest_common.sh@10 -- # set +x 00:09:04.538 ************************************ 00:09:04.538 END TEST nvmf_queue_depth 00:09:04.538 ************************************ 00:09:04.538 06:39:16 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:04.538 06:39:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:04.538 06:39:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.538 06:39:16 -- common/autotest_common.sh@10 -- # set +x 00:09:04.538 ************************************ 00:09:04.538 START TEST nvmf_multipath 00:09:04.538 ************************************ 00:09:04.538 06:39:17 -- 
common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:04.538 * Looking for test storage... 00:09:04.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.538 06:39:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:04.538 06:39:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:04.538 06:39:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:04.538 06:39:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:04.538 06:39:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:04.538 06:39:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:04.538 06:39:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:04.538 06:39:17 -- scripts/common.sh@335 -- # IFS=.-: 00:09:04.538 06:39:17 -- scripts/common.sh@335 -- # read -ra ver1 00:09:04.538 06:39:17 -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.538 06:39:17 -- scripts/common.sh@336 -- # read -ra ver2 00:09:04.538 06:39:17 -- scripts/common.sh@337 -- # local 'op=<' 00:09:04.538 06:39:17 -- scripts/common.sh@339 -- # ver1_l=2 00:09:04.538 06:39:17 -- scripts/common.sh@340 -- # ver2_l=1 00:09:04.538 06:39:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:04.538 06:39:17 -- scripts/common.sh@343 -- # case "$op" in 00:09:04.538 06:39:17 -- scripts/common.sh@344 -- # : 1 00:09:04.538 06:39:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:04.538 06:39:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:04.538 06:39:17 -- scripts/common.sh@364 -- # decimal 1 00:09:04.538 06:39:17 -- scripts/common.sh@352 -- # local d=1 00:09:04.538 06:39:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.538 06:39:17 -- scripts/common.sh@354 -- # echo 1 00:09:04.538 06:39:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:04.538 06:39:17 -- scripts/common.sh@365 -- # decimal 2 00:09:04.538 06:39:17 -- scripts/common.sh@352 -- # local d=2 00:09:04.538 06:39:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.538 06:39:17 -- scripts/common.sh@354 -- # echo 2 00:09:04.538 06:39:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:04.538 06:39:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:04.538 06:39:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:04.538 06:39:17 -- scripts/common.sh@367 -- # return 0 00:09:04.538 06:39:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.538 06:39:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:04.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.538 --rc genhtml_branch_coverage=1 00:09:04.538 --rc genhtml_function_coverage=1 00:09:04.538 --rc genhtml_legend=1 00:09:04.538 --rc geninfo_all_blocks=1 00:09:04.538 --rc geninfo_unexecuted_blocks=1 00:09:04.538 00:09:04.538 ' 00:09:04.538 06:39:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:04.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.538 --rc genhtml_branch_coverage=1 00:09:04.538 --rc genhtml_function_coverage=1 00:09:04.538 --rc genhtml_legend=1 00:09:04.538 --rc geninfo_all_blocks=1 00:09:04.538 --rc geninfo_unexecuted_blocks=1 00:09:04.538 00:09:04.538 ' 00:09:04.538 06:39:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:04.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.538 --rc genhtml_branch_coverage=1 00:09:04.538 --rc genhtml_function_coverage=1 00:09:04.538 
--rc genhtml_legend=1 00:09:04.538 --rc geninfo_all_blocks=1 00:09:04.538 --rc geninfo_unexecuted_blocks=1 00:09:04.538 00:09:04.538 ' 00:09:04.538 06:39:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:04.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.538 --rc genhtml_branch_coverage=1 00:09:04.538 --rc genhtml_function_coverage=1 00:09:04.538 --rc genhtml_legend=1 00:09:04.538 --rc geninfo_all_blocks=1 00:09:04.538 --rc geninfo_unexecuted_blocks=1 00:09:04.538 00:09:04.538 ' 00:09:04.538 06:39:17 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:04.538 06:39:17 -- nvmf/common.sh@7 -- # uname -s 00:09:04.538 06:39:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.538 06:39:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.538 06:39:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.538 06:39:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.538 06:39:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.538 06:39:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.538 06:39:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.538 06:39:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.538 06:39:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.538 06:39:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.538 06:39:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:09:04.538 06:39:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:09:04.538 06:39:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.538 06:39:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.538 06:39:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:04.538 06:39:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.538 06:39:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.538 06:39:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.538 06:39:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.538 06:39:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.538 06:39:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.539 06:39:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.539 06:39:17 -- paths/export.sh@5 -- # export PATH 00:09:04.539 06:39:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.539 06:39:17 -- nvmf/common.sh@46 -- # : 0 00:09:04.539 06:39:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:04.539 06:39:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:04.539 06:39:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:04.539 06:39:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.539 06:39:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.539 06:39:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:04.539 06:39:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:04.539 06:39:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:04.539 06:39:17 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:04.539 06:39:17 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:04.539 06:39:17 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:04.539 06:39:17 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:04.539 06:39:17 -- target/multipath.sh@43 -- # nvmftestinit 00:09:04.539 06:39:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:04.539 06:39:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.539 06:39:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:04.539 06:39:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:04.539 06:39:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:04.539 06:39:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.539 06:39:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.539 06:39:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.539 06:39:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:04.539 06:39:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:04.539 06:39:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:04.539 06:39:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:04.539 06:39:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:04.539 06:39:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:04.539 06:39:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.539 06:39:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.539 06:39:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:04.539 06:39:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:04.539 06:39:17 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:04.539 06:39:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:04.539 06:39:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:04.539 06:39:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.539 06:39:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:04.539 06:39:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:04.539 06:39:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:04.539 06:39:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:04.539 06:39:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:04.539 06:39:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:04.539 Cannot find device "nvmf_tgt_br" 00:09:04.539 06:39:17 -- nvmf/common.sh@154 -- # true 00:09:04.539 06:39:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:04.539 Cannot find device "nvmf_tgt_br2" 00:09:04.539 06:39:17 -- nvmf/common.sh@155 -- # true 00:09:04.539 06:39:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:04.539 06:39:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:04.539 Cannot find device "nvmf_tgt_br" 00:09:04.539 06:39:17 -- nvmf/common.sh@157 -- # true 00:09:04.539 06:39:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:04.539 Cannot find device "nvmf_tgt_br2" 00:09:04.539 06:39:17 -- nvmf/common.sh@158 -- # true 00:09:04.539 06:39:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:04.539 06:39:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:04.539 06:39:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:04.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.539 06:39:17 -- nvmf/common.sh@161 -- # true 00:09:04.539 06:39:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:04.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:04.539 06:39:17 -- nvmf/common.sh@162 -- # true 00:09:04.539 06:39:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:04.539 06:39:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:04.539 06:39:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:04.539 06:39:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:04.539 06:39:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:04.539 06:39:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:04.539 06:39:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:04.539 06:39:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:04.539 06:39:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:04.539 06:39:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:04.539 06:39:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:04.539 06:39:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:04.539 06:39:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:04.539 06:39:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:09:04.539 06:39:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:04.539 06:39:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:04.539 06:39:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:04.539 06:39:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:04.539 06:39:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:04.539 06:39:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:04.539 06:39:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:04.539 06:39:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:04.539 06:39:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:04.539 06:39:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:04.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:04.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:09:04.539 00:09:04.539 --- 10.0.0.2 ping statistics --- 00:09:04.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.539 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:04.539 06:39:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:04.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:04.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:09:04.539 00:09:04.539 --- 10.0.0.3 ping statistics --- 00:09:04.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.539 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:04.539 06:39:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:04.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:04.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:04.539 00:09:04.539 --- 10.0.0.1 ping statistics --- 00:09:04.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.539 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:04.539 06:39:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.539 06:39:17 -- nvmf/common.sh@421 -- # return 0 00:09:04.539 06:39:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:04.539 06:39:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.539 06:39:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:04.539 06:39:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:04.539 06:39:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.539 06:39:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:04.539 06:39:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:04.539 06:39:17 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:04.539 06:39:17 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:04.539 06:39:17 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:04.539 06:39:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:04.539 06:39:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.539 06:39:17 -- common/autotest_common.sh@10 -- # set +x 00:09:04.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
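For nvmf_multipath the veth/bridge topology is torn down and rebuilt exactly as before. Condensed from the nvmf_veth_init trace above (interface names and addresses as in the log; the per-interface `ip link set ... up` calls are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target portal
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target portal
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT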
00:09:04.539 06:39:17 -- nvmf/common.sh@469 -- # nvmfpid=62232 00:09:04.539 06:39:17 -- nvmf/common.sh@470 -- # waitforlisten 62232 00:09:04.539 06:39:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.539 06:39:17 -- common/autotest_common.sh@829 -- # '[' -z 62232 ']' 00:09:04.539 06:39:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.539 06:39:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.539 06:39:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.539 06:39:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.539 06:39:17 -- common/autotest_common.sh@10 -- # set +x 00:09:04.539 [2024-12-14 06:39:17.585061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:04.539 [2024-12-14 06:39:17.585161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.539 [2024-12-14 06:39:17.726986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.540 [2024-12-14 06:39:17.800318] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:04.540 [2024-12-14 06:39:17.800751] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.540 [2024-12-14 06:39:17.800983] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.540 [2024-12-14 06:39:17.801154] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
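Here the target gets -m 0xF, so four reactors come up (cores 0-3, confirmed by the reactor notices that follow). The trace below then builds the actual multipath scenario: one ANA-reporting subsystem exposed on both 10.0.0.2 and 10.0.0.3, one `nvme connect` per portal so the host sees two paths (nvme0c0n1 and nvme0c1n1), and per-listener ANA state flips while fio runs, with /sys/block/nvme0c*n1/ana_state polled after every flip. Reduced to its moving parts (arguments copied from the trace; check_ana_state is the test script's own polling helper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # one subsystem with ANA reporting (-r), two TCP portals
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # host side: one connect per portal gives two paths to the same namespace
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp \
        -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    # per-listener ANA transitions that the fio workload has to ride out
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    cat /sys/block/nvme0c0n1/ana_state   # the test polls this until the expected state shows up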
00:09:04.540 [2024-12-14 06:39:17.801423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.540 [2024-12-14 06:39:17.801517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.540 [2024-12-14 06:39:17.801692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.540 [2024-12-14 06:39:17.801701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.798 06:39:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.798 06:39:18 -- common/autotest_common.sh@862 -- # return 0 00:09:04.798 06:39:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:04.799 06:39:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.799 06:39:18 -- common/autotest_common.sh@10 -- # set +x 00:09:04.799 06:39:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.799 06:39:18 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:05.057 [2024-12-14 06:39:18.891426] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.057 06:39:18 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:05.316 Malloc0 00:09:05.316 06:39:19 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:05.575 06:39:19 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:05.835 06:39:19 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.093 [2024-12-14 06:39:19.893228] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.094 06:39:19 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:06.352 [2024-12-14 06:39:20.169511] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:06.352 06:39:20 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:06.352 06:39:20 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:06.611 06:39:20 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:06.611 06:39:20 -- common/autotest_common.sh@1187 -- # local i=0 00:09:06.611 06:39:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.611 06:39:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:09:06.611 06:39:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:08.517 06:39:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:09:08.517 06:39:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.517 06:39:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:08.517 06:39:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:09:08.517 06:39:22 -- 
common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.517 06:39:22 -- common/autotest_common.sh@1197 -- # return 0 00:09:08.517 06:39:22 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:08.517 06:39:22 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:08.517 06:39:22 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:08.517 06:39:22 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:08.517 06:39:22 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:08.517 06:39:22 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:08.517 06:39:22 -- target/multipath.sh@38 -- # return 0 00:09:08.517 06:39:22 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:08.517 06:39:22 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:08.517 06:39:22 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:08.517 06:39:22 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:08.517 06:39:22 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:08.517 06:39:22 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:08.517 06:39:22 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:08.517 06:39:22 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:08.517 06:39:22 -- target/multipath.sh@22 -- # local timeout=20 00:09:08.517 06:39:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:08.517 06:39:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:08.517 06:39:22 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:08.517 06:39:22 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:08.517 06:39:22 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:08.517 06:39:22 -- target/multipath.sh@22 -- # local timeout=20 00:09:08.517 06:39:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:08.517 06:39:22 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:08.517 06:39:22 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:08.517 06:39:22 -- target/multipath.sh@85 -- # echo numa 00:09:08.517 06:39:22 -- target/multipath.sh@88 -- # fio_pid=62327 00:09:08.517 06:39:22 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:08.517 06:39:22 -- target/multipath.sh@90 -- # sleep 1 00:09:08.517 [global] 00:09:08.517 thread=1 00:09:08.517 invalidate=1 00:09:08.517 rw=randrw 00:09:08.517 time_based=1 00:09:08.517 runtime=6 00:09:08.517 ioengine=libaio 00:09:08.517 direct=1 00:09:08.517 bs=4096 00:09:08.517 iodepth=128 00:09:08.517 norandommap=0 00:09:08.517 numjobs=1 00:09:08.517 00:09:08.777 verify_dump=1 00:09:08.777 verify_backlog=512 00:09:08.777 verify_state_save=0 00:09:08.777 do_verify=1 00:09:08.777 verify=crc32c-intel 00:09:08.777 [job0] 00:09:08.777 filename=/dev/nvme0n1 00:09:08.777 Could not set queue depth (nvme0n1) 00:09:08.777 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.777 fio-3.35 00:09:08.777 Starting 1 thread 00:09:09.715 06:39:23 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:09.975 06:39:23 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:10.234 06:39:24 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:10.234 06:39:24 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:10.234 06:39:24 -- target/multipath.sh@22 -- # local timeout=20 00:09:10.234 06:39:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:10.234 06:39:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:10.234 06:39:24 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:10.234 06:39:24 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:10.234 06:39:24 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:10.234 06:39:24 -- target/multipath.sh@22 -- # local timeout=20 00:09:10.234 06:39:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:10.234 06:39:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:10.234 06:39:24 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:10.234 06:39:24 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:10.493 06:39:24 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:10.753 06:39:24 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:10.753 06:39:24 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:10.753 06:39:24 -- target/multipath.sh@22 -- # local timeout=20 00:09:10.753 06:39:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:10.753 06:39:24 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:10.753 06:39:24 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:10.753 06:39:24 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:10.753 06:39:24 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:10.753 06:39:24 -- target/multipath.sh@22 -- # local timeout=20 00:09:10.753 06:39:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:10.753 06:39:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:10.753 06:39:24 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:10.753 06:39:24 -- target/multipath.sh@104 -- # wait 62327 00:09:14.943 00:09:14.943 job0: (groupid=0, jobs=1): err= 0: pid=62348: Sat Dec 14 06:39:28 2024 00:09:14.943 read: IOPS=11.3k, BW=44.0MiB/s (46.2MB/s)(265MiB/6007msec) 00:09:14.943 slat (usec): min=4, max=6027, avg=52.06, stdev=217.38 00:09:14.943 clat (usec): min=1384, max=16633, avg=7708.44, stdev=1430.10 00:09:14.943 lat (usec): min=1394, max=16645, avg=7760.50, stdev=1435.03 00:09:14.943 clat percentiles (usec): 00:09:14.943 | 1.00th=[ 4047], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 6849], 00:09:14.943 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 7767], 00:09:14.943 | 70.00th=[ 8029], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[10814], 00:09:14.943 | 99.00th=[12256], 99.50th=[12649], 99.90th=[14353], 99.95th=[15008], 00:09:14.943 | 99.99th=[15926] 00:09:14.943 bw ( KiB/s): min=10072, max=31176, per=51.92%, avg=23411.75, stdev=6533.79, samples=12 00:09:14.943 iops : min= 2518, max= 7794, avg=5852.92, stdev=1633.44, samples=12 00:09:14.943 write: IOPS=6623, BW=25.9MiB/s (27.1MB/s)(138MiB/5315msec); 0 zone resets 00:09:14.943 slat (usec): min=15, max=3140, avg=60.45, stdev=147.65 00:09:14.943 clat (usec): min=1679, max=16616, avg=6819.59, stdev=1262.51 00:09:14.943 lat (usec): min=1702, max=16643, avg=6880.04, stdev=1267.92 00:09:14.943 clat percentiles (usec): 00:09:14.943 | 1.00th=[ 3097], 5.00th=[ 4080], 10.00th=[ 5407], 20.00th=[ 6259], 00:09:14.943 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7111], 00:09:14.943 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7963], 95.00th=[ 8455], 00:09:14.943 | 99.00th=[10552], 99.50th=[11076], 99.90th=[12387], 99.95th=[13042], 00:09:14.943 | 99.99th=[14746] 00:09:14.943 bw ( KiB/s): min=10224, max=30576, per=88.43%, avg=23429.58, stdev=6325.54, samples=12 00:09:14.943 iops : min= 2556, max= 7644, avg=5857.33, stdev=1581.36, samples=12 00:09:14.943 lat (msec) : 2=0.02%, 4=2.18%, 10=92.41%, 20=5.38% 00:09:14.943 cpu : usr=6.01%, sys=21.91%, ctx=5840, majf=0, minf=108 00:09:14.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:14.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.943 issued rwts: total=67715,35205,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.944 00:09:14.944 Run status group 0 (all jobs): 00:09:14.944 READ: bw=44.0MiB/s (46.2MB/s), 44.0MiB/s-44.0MiB/s (46.2MB/s-46.2MB/s), io=265MiB (277MB), run=6007-6007msec 00:09:14.944 WRITE: bw=25.9MiB/s (27.1MB/s), 25.9MiB/s-25.9MiB/s (27.1MB/s-27.1MB/s), io=138MiB (144MB), run=5315-5315msec 00:09:14.944 00:09:14.944 Disk stats (read/write): 00:09:14.944 nvme0n1: ios=66603/34566, merge=0/0, 
ticks=490996/220588, in_queue=711584, util=98.60% 00:09:14.944 06:39:28 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:15.203 06:39:29 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:15.461 06:39:29 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:15.461 06:39:29 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:15.461 06:39:29 -- target/multipath.sh@22 -- # local timeout=20 00:09:15.461 06:39:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:15.461 06:39:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:15.461 06:39:29 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:15.461 06:39:29 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:15.461 06:39:29 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:15.461 06:39:29 -- target/multipath.sh@22 -- # local timeout=20 00:09:15.461 06:39:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:15.461 06:39:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:15.461 06:39:29 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:15.461 06:39:29 -- target/multipath.sh@113 -- # echo round-robin 00:09:15.461 06:39:29 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:15.461 06:39:29 -- target/multipath.sh@116 -- # fio_pid=62429 00:09:15.461 06:39:29 -- target/multipath.sh@118 -- # sleep 1 00:09:15.461 [global] 00:09:15.461 thread=1 00:09:15.461 invalidate=1 00:09:15.461 rw=randrw 00:09:15.461 time_based=1 00:09:15.461 runtime=6 00:09:15.461 ioengine=libaio 00:09:15.461 direct=1 00:09:15.461 bs=4096 00:09:15.461 iodepth=128 00:09:15.461 norandommap=0 00:09:15.461 numjobs=1 00:09:15.461 00:09:15.461 verify_dump=1 00:09:15.461 verify_backlog=512 00:09:15.461 verify_state_save=0 00:09:15.461 do_verify=1 00:09:15.461 verify=crc32c-intel 00:09:15.461 [job0] 00:09:15.461 filename=/dev/nvme0n1 00:09:15.461 Could not set queue depth (nvme0n1) 00:09:15.720 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.720 fio-3.35 00:09:15.720 Starting 1 thread 00:09:16.657 06:39:30 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:16.916 06:39:30 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:17.175 06:39:30 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:17.175 06:39:30 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:17.175 06:39:30 -- target/multipath.sh@22 -- # local timeout=20 00:09:17.175 06:39:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:17.175 06:39:30 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:17.175 06:39:30 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:17.175 06:39:30 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:17.175 06:39:30 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:17.175 06:39:30 -- target/multipath.sh@22 -- # local timeout=20 00:09:17.175 06:39:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:17.175 06:39:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:17.175 06:39:30 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:17.175 06:39:30 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:17.434 06:39:31 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:17.693 06:39:31 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:17.693 06:39:31 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:17.693 06:39:31 -- target/multipath.sh@22 -- # local timeout=20 00:09:17.693 06:39:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:17.693 06:39:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:17.693 06:39:31 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:17.693 06:39:31 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:17.693 06:39:31 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:17.693 06:39:31 -- target/multipath.sh@22 -- # local timeout=20 00:09:17.693 06:39:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:17.693 06:39:31 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:17.693 06:39:31 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:17.693 06:39:31 -- target/multipath.sh@132 -- # wait 62429 00:09:21.882 00:09:21.882 job0: (groupid=0, jobs=1): err= 0: pid=62450: Sat Dec 14 06:39:35 2024 00:09:21.882 read: IOPS=12.2k, BW=47.7MiB/s (50.0MB/s)(286MiB/6002msec) 00:09:21.882 slat (usec): min=3, max=5931, avg=41.85, stdev=194.40 00:09:21.882 clat (usec): min=503, max=14584, avg=7217.15, stdev=1716.66 00:09:21.882 lat (usec): min=524, max=14593, avg=7259.00, stdev=1731.40 00:09:21.882 clat percentiles (usec): 00:09:21.882 | 1.00th=[ 3326], 5.00th=[ 4178], 10.00th=[ 4752], 20.00th=[ 5735], 00:09:21.882 | 30.00th=[ 6652], 40.00th=[ 7111], 50.00th=[ 7439], 60.00th=[ 7767], 00:09:21.882 | 70.00th=[ 8029], 80.00th=[ 8356], 90.00th=[ 8848], 95.00th=[ 9634], 00:09:21.882 | 99.00th=[12125], 99.50th=[12387], 99.90th=[13042], 99.95th=[13042], 00:09:21.882 | 99.99th=[13829] 00:09:21.882 bw ( KiB/s): min= 8408, max=39904, per=53.49%, avg=26107.64, stdev=8110.46, samples=11 00:09:21.882 iops : min= 2102, max= 9976, avg=6526.91, stdev=2027.62, samples=11 00:09:21.882 write: IOPS=7229, BW=28.2MiB/s (29.6MB/s)(148MiB/5239msec); 0 zone resets 00:09:21.882 slat (usec): min=5, max=1809, avg=52.39, stdev=130.27 00:09:21.882 clat (usec): min=268, max=13798, avg=6155.39, stdev=1732.39 00:09:21.882 lat (usec): min=318, max=13821, avg=6207.78, stdev=1747.00 00:09:21.882 clat percentiles (usec): 00:09:21.882 | 1.00th=[ 2606], 5.00th=[ 3163], 10.00th=[ 3556], 20.00th=[ 4178], 00:09:21.882 | 30.00th=[ 4948], 40.00th=[ 6325], 50.00th=[ 6783], 60.00th=[ 7111], 00:09:21.882 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8160], 00:09:21.882 | 99.00th=[ 9896], 99.50th=[10945], 99.90th=[11863], 99.95th=[12125], 00:09:21.882 | 99.99th=[13042] 00:09:21.882 bw ( KiB/s): min= 8888, max=39024, per=90.10%, avg=26056.00, stdev=7875.06, samples=11 00:09:21.882 iops : min= 2222, max= 9756, avg=6514.00, stdev=1968.77, samples=11 00:09:21.883 lat (usec) : 500=0.01%, 750=0.01% 00:09:21.883 lat (msec) : 2=0.08%, 4=8.24%, 10=88.30%, 20=3.38% 00:09:21.883 cpu : usr=6.03%, sys=23.48%, ctx=5969, majf=0, minf=114 00:09:21.883 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:21.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.883 issued rwts: total=73234,37875,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.883 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.883 00:09:21.883 Run status group 0 (all jobs): 00:09:21.883 READ: bw=47.7MiB/s (50.0MB/s), 47.7MiB/s-47.7MiB/s (50.0MB/s-50.0MB/s), io=286MiB (300MB), run=6002-6002msec 00:09:21.883 WRITE: bw=28.2MiB/s (29.6MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=148MiB (155MB), run=5239-5239msec 00:09:21.883 00:09:21.883 Disk stats (read/write): 00:09:21.883 nvme0n1: ios=71703/37875, merge=0/0, ticks=490124/215851, in_queue=705975, util=98.61% 00:09:21.883 06:39:35 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:21.883 06:39:35 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:21.883 06:39:35 -- common/autotest_common.sh@1208 -- # local i=0 00:09:21.883 06:39:35 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.883 06:39:35 -- 
common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:21.883 06:39:35 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:21.883 06:39:35 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:21.883 06:39:35 -- common/autotest_common.sh@1220 -- # return 0 00:09:21.883 06:39:35 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.141 06:39:36 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:22.141 06:39:36 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:22.141 06:39:36 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:22.141 06:39:36 -- target/multipath.sh@144 -- # nvmftestfini 00:09:22.141 06:39:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:22.141 06:39:36 -- nvmf/common.sh@116 -- # sync 00:09:22.141 06:39:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:22.141 06:39:36 -- nvmf/common.sh@119 -- # set +e 00:09:22.141 06:39:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:22.141 06:39:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:22.141 rmmod nvme_tcp 00:09:22.399 rmmod nvme_fabrics 00:09:22.399 rmmod nvme_keyring 00:09:22.399 06:39:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:22.399 06:39:36 -- nvmf/common.sh@123 -- # set -e 00:09:22.399 06:39:36 -- nvmf/common.sh@124 -- # return 0 00:09:22.399 06:39:36 -- nvmf/common.sh@477 -- # '[' -n 62232 ']' 00:09:22.400 06:39:36 -- nvmf/common.sh@478 -- # killprocess 62232 00:09:22.400 06:39:36 -- common/autotest_common.sh@936 -- # '[' -z 62232 ']' 00:09:22.400 06:39:36 -- common/autotest_common.sh@940 -- # kill -0 62232 00:09:22.400 06:39:36 -- common/autotest_common.sh@941 -- # uname 00:09:22.400 06:39:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:22.400 06:39:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62232 00:09:22.400 killing process with pid 62232 00:09:22.400 06:39:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:22.400 06:39:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:22.400 06:39:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62232' 00:09:22.400 06:39:36 -- common/autotest_common.sh@955 -- # kill 62232 00:09:22.400 06:39:36 -- common/autotest_common.sh@960 -- # wait 62232 00:09:22.658 06:39:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:22.658 06:39:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:22.658 06:39:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:22.658 06:39:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:22.658 06:39:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:22.658 06:39:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.658 06:39:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.658 06:39:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.658 06:39:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:22.658 ************************************ 00:09:22.658 END TEST nvmf_multipath 00:09:22.658 ************************************ 00:09:22.658 00:09:22.658 real 0m19.444s 00:09:22.658 user 1m12.845s 00:09:22.658 sys 0m10.085s 00:09:22.658 06:39:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.658 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:09:22.658 06:39:36 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:22.658 06:39:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:22.658 06:39:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.658 06:39:36 -- common/autotest_common.sh@10 -- # set +x 00:09:22.658 ************************************ 00:09:22.658 START TEST nvmf_zcopy 00:09:22.658 ************************************ 00:09:22.658 06:39:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:22.658 * Looking for test storage... 00:09:22.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:22.658 06:39:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:22.658 06:39:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:22.658 06:39:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:22.917 06:39:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:22.917 06:39:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:22.917 06:39:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:22.917 06:39:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:22.917 06:39:36 -- scripts/common.sh@335 -- # IFS=.-: 00:09:22.917 06:39:36 -- scripts/common.sh@335 -- # read -ra ver1 00:09:22.917 06:39:36 -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.917 06:39:36 -- scripts/common.sh@336 -- # read -ra ver2 00:09:22.917 06:39:36 -- scripts/common.sh@337 -- # local 'op=<' 00:09:22.917 06:39:36 -- scripts/common.sh@339 -- # ver1_l=2 00:09:22.917 06:39:36 -- scripts/common.sh@340 -- # ver2_l=1 00:09:22.917 06:39:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:22.917 06:39:36 -- scripts/common.sh@343 -- # case "$op" in 00:09:22.917 06:39:36 -- scripts/common.sh@344 -- # : 1 00:09:22.917 06:39:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:22.917 06:39:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.917 06:39:36 -- scripts/common.sh@364 -- # decimal 1 00:09:22.917 06:39:36 -- scripts/common.sh@352 -- # local d=1 00:09:22.917 06:39:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.917 06:39:36 -- scripts/common.sh@354 -- # echo 1 00:09:22.917 06:39:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:22.917 06:39:36 -- scripts/common.sh@365 -- # decimal 2 00:09:22.917 06:39:36 -- scripts/common.sh@352 -- # local d=2 00:09:22.917 06:39:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.917 06:39:36 -- scripts/common.sh@354 -- # echo 2 00:09:22.917 06:39:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:22.917 06:39:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:22.918 06:39:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:22.918 06:39:36 -- scripts/common.sh@367 -- # return 0 00:09:22.918 06:39:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.918 06:39:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:22.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.918 --rc genhtml_branch_coverage=1 00:09:22.918 --rc genhtml_function_coverage=1 00:09:22.918 --rc genhtml_legend=1 00:09:22.918 --rc geninfo_all_blocks=1 00:09:22.918 --rc geninfo_unexecuted_blocks=1 00:09:22.918 00:09:22.918 ' 00:09:22.918 06:39:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:22.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.918 --rc genhtml_branch_coverage=1 00:09:22.918 --rc genhtml_function_coverage=1 00:09:22.918 --rc genhtml_legend=1 00:09:22.918 --rc geninfo_all_blocks=1 00:09:22.918 --rc geninfo_unexecuted_blocks=1 00:09:22.918 00:09:22.918 ' 00:09:22.918 06:39:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:22.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.918 --rc genhtml_branch_coverage=1 00:09:22.918 --rc genhtml_function_coverage=1 00:09:22.918 --rc genhtml_legend=1 00:09:22.918 --rc geninfo_all_blocks=1 00:09:22.918 --rc geninfo_unexecuted_blocks=1 00:09:22.918 00:09:22.918 ' 00:09:22.918 06:39:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:22.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.918 --rc genhtml_branch_coverage=1 00:09:22.918 --rc genhtml_function_coverage=1 00:09:22.918 --rc genhtml_legend=1 00:09:22.918 --rc geninfo_all_blocks=1 00:09:22.918 --rc geninfo_unexecuted_blocks=1 00:09:22.918 00:09:22.918 ' 00:09:22.918 06:39:36 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:22.918 06:39:36 -- nvmf/common.sh@7 -- # uname -s 00:09:22.918 06:39:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.918 06:39:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.918 06:39:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.918 06:39:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.918 06:39:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.918 06:39:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.918 06:39:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.918 06:39:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.918 06:39:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.918 06:39:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.918 06:39:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:09:22.918 
06:39:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:09:22.918 06:39:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.918 06:39:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.918 06:39:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:22.918 06:39:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:22.918 06:39:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.918 06:39:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.918 06:39:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.918 06:39:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.918 06:39:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.918 06:39:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.918 06:39:36 -- paths/export.sh@5 -- # export PATH 00:09:22.918 06:39:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.918 06:39:36 -- nvmf/common.sh@46 -- # : 0 00:09:22.918 06:39:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:22.918 06:39:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:22.918 06:39:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:22.918 06:39:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.918 06:39:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.918 06:39:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:09:22.918 06:39:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:22.918 06:39:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:22.918 06:39:36 -- target/zcopy.sh@12 -- # nvmftestinit 00:09:22.918 06:39:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:22.918 06:39:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.918 06:39:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:22.918 06:39:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:22.918 06:39:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:22.918 06:39:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.918 06:39:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.918 06:39:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.918 06:39:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:22.918 06:39:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:22.918 06:39:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:22.918 06:39:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:22.918 06:39:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:22.918 06:39:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:22.918 06:39:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.918 06:39:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.918 06:39:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:22.918 06:39:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:22.918 06:39:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:22.918 06:39:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:22.918 06:39:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:22.918 06:39:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.918 06:39:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:22.918 06:39:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:22.918 06:39:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:22.918 06:39:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:22.918 06:39:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:22.918 06:39:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:22.918 Cannot find device "nvmf_tgt_br" 00:09:22.918 06:39:36 -- nvmf/common.sh@154 -- # true 00:09:22.918 06:39:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:22.918 Cannot find device "nvmf_tgt_br2" 00:09:22.918 06:39:36 -- nvmf/common.sh@155 -- # true 00:09:22.918 06:39:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:22.918 06:39:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:22.918 Cannot find device "nvmf_tgt_br" 00:09:22.918 06:39:36 -- nvmf/common.sh@157 -- # true 00:09:22.918 06:39:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:22.918 Cannot find device "nvmf_tgt_br2" 00:09:22.918 06:39:36 -- nvmf/common.sh@158 -- # true 00:09:22.918 06:39:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:22.918 06:39:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:22.918 06:39:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:22.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.918 06:39:36 -- nvmf/common.sh@161 -- # true 00:09:22.918 06:39:36 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:22.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:22.918 06:39:36 -- nvmf/common.sh@162 -- # true 00:09:22.918 06:39:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:22.918 06:39:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:22.918 06:39:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:22.918 06:39:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:22.918 06:39:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:22.918 06:39:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:23.178 06:39:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:23.178 06:39:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:23.178 06:39:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:23.178 06:39:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:23.178 06:39:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:23.178 06:39:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:23.178 06:39:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:23.178 06:39:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:23.178 06:39:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:23.178 06:39:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:23.178 06:39:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:23.178 06:39:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:23.178 06:39:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:23.178 06:39:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:23.178 06:39:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:23.178 06:39:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:23.178 06:39:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:23.178 06:39:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:23.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:09:23.178 00:09:23.178 --- 10.0.0.2 ping statistics --- 00:09:23.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.178 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:23.178 06:39:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:23.178 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:23.178 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:09:23.178 00:09:23.178 --- 10.0.0.3 ping statistics --- 00:09:23.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.178 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:23.178 06:39:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:23.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:23.178 00:09:23.178 --- 10.0.0.1 ping statistics --- 00:09:23.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.178 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:23.178 06:39:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.178 06:39:37 -- nvmf/common.sh@421 -- # return 0 00:09:23.178 06:39:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:23.178 06:39:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.178 06:39:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:23.178 06:39:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:23.178 06:39:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.178 06:39:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:23.178 06:39:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:23.178 06:39:37 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:23.178 06:39:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:23.178 06:39:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:23.178 06:39:37 -- common/autotest_common.sh@10 -- # set +x 00:09:23.178 06:39:37 -- nvmf/common.sh@469 -- # nvmfpid=62705 00:09:23.178 06:39:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:23.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.178 06:39:37 -- nvmf/common.sh@470 -- # waitforlisten 62705 00:09:23.178 06:39:37 -- common/autotest_common.sh@829 -- # '[' -z 62705 ']' 00:09:23.178 06:39:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.178 06:39:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:23.178 06:39:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.178 06:39:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:23.178 06:39:37 -- common/autotest_common.sh@10 -- # set +x 00:09:23.178 [2024-12-14 06:39:37.125455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:23.178 [2024-12-14 06:39:37.125763] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.437 [2024-12-14 06:39:37.261694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.437 [2024-12-14 06:39:37.314779] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:23.437 [2024-12-14 06:39:37.315244] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.437 [2024-12-14 06:39:37.315372] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.437 [2024-12-14 06:39:37.315554] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
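The nvmf_veth_init steps above build a small veth-and-bridge topology before the target is started: the initiator-side interface nvmf_init_if (10.0.0.1) stays in the root namespace, while nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and everything is joined through the nvmf_br bridge. A condensed sketch of that setup, using only the interface names, addresses, and firewall rules that appear in the log above (error handling and the initial teardown attempts omitted):

# rebuild the test topology used by nvmf_veth_init (names/addresses as in the log)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> first target IP, as verified in the log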
00:09:23.437 [2024-12-14 06:39:37.315620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.373 06:39:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.373 06:39:38 -- common/autotest_common.sh@862 -- # return 0 00:09:24.373 06:39:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:24.373 06:39:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:24.373 06:39:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.373 06:39:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.373 06:39:38 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:24.373 06:39:38 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:24.373 06:39:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.373 06:39:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.373 [2024-12-14 06:39:38.145946] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.373 06:39:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.373 06:39:38 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:24.373 06:39:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.373 06:39:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.373 06:39:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.373 06:39:38 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.373 06:39:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.373 06:39:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.373 [2024-12-14 06:39:38.162053] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.373 06:39:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.373 06:39:38 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:24.373 06:39:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.373 06:39:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.373 06:39:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.373 06:39:38 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:24.373 06:39:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.373 06:39:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.373 malloc0 00:09:24.373 06:39:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.373 06:39:38 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:24.373 06:39:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.373 06:39:38 -- common/autotest_common.sh@10 -- # set +x 00:09:24.373 06:39:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.373 06:39:38 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:24.373 06:39:38 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:24.373 06:39:38 -- nvmf/common.sh@520 -- # config=() 00:09:24.373 06:39:38 -- nvmf/common.sh@520 -- # local subsystem config 00:09:24.373 06:39:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:24.373 06:39:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:24.373 { 00:09:24.373 "params": { 00:09:24.373 "name": "Nvme$subsystem", 00:09:24.373 "trtype": "$TEST_TRANSPORT", 
00:09:24.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.373 "adrfam": "ipv4", 00:09:24.373 "trsvcid": "$NVMF_PORT", 00:09:24.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.373 "hdgst": ${hdgst:-false}, 00:09:24.373 "ddgst": ${ddgst:-false} 00:09:24.373 }, 00:09:24.373 "method": "bdev_nvme_attach_controller" 00:09:24.373 } 00:09:24.373 EOF 00:09:24.373 )") 00:09:24.373 06:39:38 -- nvmf/common.sh@542 -- # cat 00:09:24.373 06:39:38 -- nvmf/common.sh@544 -- # jq . 00:09:24.373 06:39:38 -- nvmf/common.sh@545 -- # IFS=, 00:09:24.373 06:39:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:24.373 "params": { 00:09:24.373 "name": "Nvme1", 00:09:24.374 "trtype": "tcp", 00:09:24.374 "traddr": "10.0.0.2", 00:09:24.374 "adrfam": "ipv4", 00:09:24.374 "trsvcid": "4420", 00:09:24.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.374 "hdgst": false, 00:09:24.374 "ddgst": false 00:09:24.374 }, 00:09:24.374 "method": "bdev_nvme_attach_controller" 00:09:24.374 }' 00:09:24.374 [2024-12-14 06:39:38.248940] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:24.374 [2024-12-14 06:39:38.249022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62744 ] 00:09:24.632 [2024-12-14 06:39:38.388064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.632 [2024-12-14 06:39:38.443749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.632 Running I/O for 10 seconds... 00:09:34.611 00:09:34.611 Latency(us) 00:09:34.611 [2024-12-14T06:39:48.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.611 [2024-12-14T06:39:48.603Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:34.611 Verification LBA range: start 0x0 length 0x1000 00:09:34.612 Nvme1n1 : 10.01 9620.94 75.16 0.00 0.00 13270.32 1333.06 22997.18 00:09:34.612 [2024-12-14T06:39:48.604Z] =================================================================================================================== 00:09:34.612 [2024-12-14T06:39:48.604Z] Total : 9620.94 75.16 0.00 0.00 13270.32 1333.06 22997.18 00:09:34.878 06:39:48 -- target/zcopy.sh@39 -- # perfpid=62861 00:09:34.878 06:39:48 -- target/zcopy.sh@41 -- # xtrace_disable 00:09:34.878 06:39:48 -- common/autotest_common.sh@10 -- # set +x 00:09:34.878 06:39:48 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:34.878 06:39:48 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:34.878 06:39:48 -- nvmf/common.sh@520 -- # config=() 00:09:34.878 06:39:48 -- nvmf/common.sh@520 -- # local subsystem config 00:09:34.878 06:39:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:34.878 06:39:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:34.878 { 00:09:34.878 "params": { 00:09:34.878 "name": "Nvme$subsystem", 00:09:34.878 "trtype": "$TEST_TRANSPORT", 00:09:34.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:34.878 "adrfam": "ipv4", 00:09:34.878 "trsvcid": "$NVMF_PORT", 00:09:34.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:34.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:34.878 "hdgst": ${hdgst:-false}, 00:09:34.878 "ddgst": ${ddgst:-false} 
00:09:34.878 }, 00:09:34.878 "method": "bdev_nvme_attach_controller" 00:09:34.878 } 00:09:34.878 EOF 00:09:34.878 )") 00:09:34.878 06:39:48 -- nvmf/common.sh@542 -- # cat 00:09:34.878 [2024-12-14 06:39:48.773047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.878 [2024-12-14 06:39:48.773094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.878 06:39:48 -- nvmf/common.sh@544 -- # jq . 00:09:34.878 06:39:48 -- nvmf/common.sh@545 -- # IFS=, 00:09:34.878 06:39:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:34.878 "params": { 00:09:34.878 "name": "Nvme1", 00:09:34.878 "trtype": "tcp", 00:09:34.878 "traddr": "10.0.0.2", 00:09:34.878 "adrfam": "ipv4", 00:09:34.878 "trsvcid": "4420", 00:09:34.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:34.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:34.878 "hdgst": false, 00:09:34.878 "ddgst": false 00:09:34.878 }, 00:09:34.878 "method": "bdev_nvme_attach_controller" 00:09:34.878 }' 00:09:34.878 [2024-12-14 06:39:48.785014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.878 [2024-12-14 06:39:48.785045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.878 [2024-12-14 06:39:48.797023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.878 [2024-12-14 06:39:48.797051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.878 [2024-12-14 06:39:48.809022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.878 [2024-12-14 06:39:48.809048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.878 [2024-12-14 06:39:48.819627] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:34.878 [2024-12-14 06:39:48.819711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62861 ] 00:09:34.878 [2024-12-14 06:39:48.821058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.878 [2024-12-14 06:39:48.821081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.878 [2024-12-14 06:39:48.833039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.878 [2024-12-14 06:39:48.833210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.878 [2024-12-14 06:39:48.845039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.878 [2024-12-14 06:39:48.845221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.878 [2024-12-14 06:39:48.857038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.878 [2024-12-14 06:39:48.857224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.137 [2024-12-14 06:39:48.869051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.137 [2024-12-14 06:39:48.869206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.137 [2024-12-14 06:39:48.881044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.137 [2024-12-14 06:39:48.881242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.137 [2024-12-14 06:39:48.893083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.137 [2024-12-14 06:39:48.893279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.137 [2024-12-14 06:39:48.905058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.137 [2024-12-14 06:39:48.905086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.137 [2024-12-14 06:39:48.917052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.137 [2024-12-14 06:39:48.917079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.137 [2024-12-14 06:39:48.929055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.137 [2024-12-14 06:39:48.929081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.137 [2024-12-14 06:39:48.941059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.137 [2024-12-14 06:39:48.941085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.137 [2024-12-14 06:39:48.953060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.137 [2024-12-14 06:39:48.953085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.137 [2024-12-14 06:39:48.958829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.137 [2024-12-14 06:39:48.965083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.137 [2024-12-14 06:39:48.965117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
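For reference, the target-side configuration that zcopy.sh issued earlier through rpc_cmd boils down to the rpc.py sequence below. This is only a sketch: rpc_cmd is the test harness wrapper around scripts/rpc.py, the RPC shell variable is shorthand introduced here, and every flag is copied verbatim from the log above.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy      # TCP transport with zero-copy enabled (-o and -c 0 as in the log)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -m 10   # serial shown in the log is SPDK00000000000001 for this run
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0              # 32 MB malloc bdev, 4096-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1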
00:09:35.137 [2024-12-14 06:39:48.977093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.137 [2024-12-14 06:39:48.977121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.137 [2024-12-14 06:39:48.989086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.137 [2024-12-14 06:39:48.989118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.137 [2024-12-14 06:39:49.001100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.137 [2024-12-14 06:39:49.001133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.137 [2024-12-14 06:39:49.011481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.137 [2024-12-14 06:39:49.013079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.137 [2024-12-14 06:39:49.013107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.137 [2024-12-14 06:39:49.025093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.138 [2024-12-14 06:39:49.025122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.138 [2024-12-14 06:39:49.037115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.138 [2024-12-14 06:39:49.037153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.138 [2024-12-14 06:39:49.049125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.138 [2024-12-14 06:39:49.049162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.138 [2024-12-14 06:39:49.061137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.138 [2024-12-14 06:39:49.061174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.138 [2024-12-14 06:39:49.073132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.138 [2024-12-14 06:39:49.073170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.138 [2024-12-14 06:39:49.085134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.138 [2024-12-14 06:39:49.085166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.138 [2024-12-14 06:39:49.097152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.138 [2024-12-14 06:39:49.097186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.138 [2024-12-14 06:39:49.109147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.138 [2024-12-14 06:39:49.109178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.138 [2024-12-14 06:39:49.121165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.138 [2024-12-14 06:39:49.121201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.133189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.133225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 Running I/O for 5 seconds... 
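Both bdevperf passes are driven the same way: gen_nvmf_target_json (the helper whose output is printed above) emits the bdev_nvme_attach_controller configuration for Nvme1 at 10.0.0.2:4420, and bdevperf reads it from a file descriptor. A sketch of the two invocations, assuming bash process substitution is what produces the /dev/fd/62 and /dev/fd/63 paths seen in the log:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# 10 s verify pass against the zcopy-enabled target (queue depth 128, 8192-byte I/O)
$BDEVPERF --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
# 5 s 50/50 random read/write pass (the perfpid=62861 run above)
$BDEVPERF --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192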
00:09:35.397 [2024-12-14 06:39:49.145183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.145213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.162597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.162788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.177857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.178054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.189181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.189426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.205354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.205386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.222848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.222909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.239041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.239074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.256825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.257022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.271403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.271436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.287023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.287060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.304769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.304968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.321162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.321205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.337371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.337404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.354490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 [2024-12-14 06:39:49.354522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.397 [2024-12-14 06:39:49.371365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.397 
[2024-12-14 06:39:49.371398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.389225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.389265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.404375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.404565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.422123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.422158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.436825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.436858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.445843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.445891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.463294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.463331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.479638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.479670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.498078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.498112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.512199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.512249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.528676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.528847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.543865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.544041] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.559912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.560100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.576120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.576297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.594289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.594500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.609642] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.609827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.627065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.627210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.657 [2024-12-14 06:39:49.644062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.657 [2024-12-14 06:39:49.644247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.916 [2024-12-14 06:39:49.659794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.916 [2024-12-14 06:39:49.659971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.916 [2024-12-14 06:39:49.674661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.916 [2024-12-14 06:39:49.674823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.916 [2024-12-14 06:39:49.690590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.916 [2024-12-14 06:39:49.690826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.916 [2024-12-14 06:39:49.706610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.916 [2024-12-14 06:39:49.706774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.916 [2024-12-14 06:39:49.724270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.916 [2024-12-14 06:39:49.724431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.916 [2024-12-14 06:39:49.738429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.916 [2024-12-14 06:39:49.738590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.916 [2024-12-14 06:39:49.754911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.916 [2024-12-14 06:39:49.755089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.916 [2024-12-14 06:39:49.768801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.916 [2024-12-14 06:39:49.768994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.916 [2024-12-14 06:39:49.784303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.916 [2024-12-14 06:39:49.784463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.916 [2024-12-14 06:39:49.802390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.916 [2024-12-14 06:39:49.802568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.916 [2024-12-14 06:39:49.817511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.916 [2024-12-14 06:39:49.817673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.916 [2024-12-14 06:39:49.834720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.916 [2024-12-14 06:39:49.834910] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.917 [2024-12-14 06:39:49.850186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.917 [2024-12-14 06:39:49.850396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.917 [2024-12-14 06:39:49.867803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.917 [2024-12-14 06:39:49.867979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.917 [2024-12-14 06:39:49.883537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.917 [2024-12-14 06:39:49.883700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:35.917 [2024-12-14 06:39:49.901744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:35.917 [2024-12-14 06:39:49.901907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.175 [2024-12-14 06:39:49.917236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:49.917401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:49.933329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:49.933486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:49.942648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:49.942800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:49.958410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:49.958575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:49.976354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:49.976509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:49.991466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:49.991630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:50.003198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:50.003390] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:50.020663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:50.020843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:50.036601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:50.036799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:50.053365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:50.053516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:50.071280] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:50.071463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:50.085483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:50.085723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:50.102871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:50.103067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:50.117005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:50.117195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:50.133367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:50.133401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.176 [2024-12-14 06:39:50.151024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.176 [2024-12-14 06:39:50.151060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.166045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.166081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.183146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.183182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.200316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.200498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.214771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.214806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.231822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.232036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.246110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.246159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.261799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.261836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.278825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.278858] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.297510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.297544] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.313267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.313301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.330515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.330699] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.346914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.346997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.363822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.363855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.380021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.380055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.399032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.399067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.435 [2024-12-14 06:39:50.413968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.435 [2024-12-14 06:39:50.414006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.431608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.431641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.448786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.448819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.464543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.464578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.482517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.482556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.497120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.497299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.513938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.513973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.530622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.530655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.547165] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.547209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.564646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.564835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.580280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.580312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.590911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.590952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.607754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.607790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.621090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.621123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.638362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.638618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.653442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.653626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.694 [2024-12-14 06:39:50.669673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.694 [2024-12-14 06:39:50.669730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.686852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.686940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.701234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.701298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.716832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.717057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.733485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.733518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.750486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.750518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.767578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.767614] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.783068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.783100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.800055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.800088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.816578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.816646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.833527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.833604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.849315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.849353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.868382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.868419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.882658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.882843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.899927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.899971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.916047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.916079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:36.953 [2024-12-14 06:39:50.934208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:36.953 [2024-12-14 06:39:50.934254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:50.949662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:50.949697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:50.967372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:50.967409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:50.982725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:50.982760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:50.994220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:50.994254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:51.010274] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:51.010307] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:51.026502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:51.026536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:51.044000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:51.044064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:51.059460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:51.059497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:51.069005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:51.069040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:51.084551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:51.084759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:51.102798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:51.102988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:51.116493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:51.116659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:51.131737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:51.131929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:51.143247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:51.143441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:51.159100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:51.159292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:51.176090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:51.176288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.213 [2024-12-14 06:39:51.193118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.213 [2024-12-14 06:39:51.193299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.208439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.208603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.219598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.219758] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.235758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.235928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.252816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.253040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.269308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.269454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.285967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.286117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.301398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.301568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.318643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.318797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.333114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.333326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.348855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.349045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.365837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.366022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.382312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.382509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.398141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.398378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.409206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.409400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.425149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.425336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.440826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.441061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.472 [2024-12-14 06:39:51.450117] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.472 [2024-12-14 06:39:51.450330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.731 [2024-12-14 06:39:51.466325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.731 [2024-12-14 06:39:51.466503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.731 [2024-12-14 06:39:51.484300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.731 [2024-12-14 06:39:51.484565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.731 [2024-12-14 06:39:51.498821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.731 [2024-12-14 06:39:51.498855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.732 [2024-12-14 06:39:51.515573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.732 [2024-12-14 06:39:51.515753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.732 [2024-12-14 06:39:51.532142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.732 [2024-12-14 06:39:51.532197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.732 [2024-12-14 06:39:51.548923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.732 [2024-12-14 06:39:51.548963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.732 [2024-12-14 06:39:51.565359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.732 [2024-12-14 06:39:51.565393] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.732 [2024-12-14 06:39:51.583086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.732 [2024-12-14 06:39:51.583119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.732 [2024-12-14 06:39:51.597583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.732 [2024-12-14 06:39:51.597616] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.732 [2024-12-14 06:39:51.614455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.732 [2024-12-14 06:39:51.614643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.732 [2024-12-14 06:39:51.630193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.732 [2024-12-14 06:39:51.630257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.732 [2024-12-14 06:39:51.648655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.732 [2024-12-14 06:39:51.648960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.732 [2024-12-14 06:39:51.662538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.732 [2024-12-14 06:39:51.662571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.732 [2024-12-14 06:39:51.678469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.732 [2024-12-14 06:39:51.678501] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.732 [2024-12-14 06:39:51.696241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.732 [2024-12-14 06:39:51.696408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.732 [2024-12-14 06:39:51.712016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.732 [2024-12-14 06:39:51.712051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.727352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.727563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.737016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.737050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.752618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.752659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.770785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.771098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.785418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.785465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.800837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.800869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.819331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.819367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.833411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.833446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.849377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.849411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.866971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.867003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.883185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.883218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.899601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.899637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.915592] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.915626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.933670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.933725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.948186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.948240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.963809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.963856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.991 [2024-12-14 06:39:51.973145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.991 [2024-12-14 06:39:51.973179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.250 [2024-12-14 06:39:51.988905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.250 [2024-12-14 06:39:51.988946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.250 [2024-12-14 06:39:52.004206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.250 [2024-12-14 06:39:52.004239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.250 [2024-12-14 06:39:52.021631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.250 [2024-12-14 06:39:52.021664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.251 [2024-12-14 06:39:52.037711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.251 [2024-12-14 06:39:52.037761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.251 [2024-12-14 06:39:52.053662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.251 [2024-12-14 06:39:52.053696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.251 [2024-12-14 06:39:52.070462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.251 [2024-12-14 06:39:52.070660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.251 [2024-12-14 06:39:52.086086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.251 [2024-12-14 06:39:52.086285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.251 [2024-12-14 06:39:52.097567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.251 [2024-12-14 06:39:52.097772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.251 [2024-12-14 06:39:52.113469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.251 [2024-12-14 06:39:52.113645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.251 [2024-12-14 06:39:52.129822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.251 [2024-12-14 06:39:52.130025] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.251 [2024-12-14 06:39:52.146347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.251 [2024-12-14 06:39:52.146523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.251 [2024-12-14 06:39:52.163131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.251 [2024-12-14 06:39:52.163295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.251 [2024-12-14 06:39:52.179932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.251 [2024-12-14 06:39:52.180107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.251 [2024-12-14 06:39:52.196113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.251 [2024-12-14 06:39:52.196297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.251 [2024-12-14 06:39:52.212862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.251 [2024-12-14 06:39:52.213107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.251 [2024-12-14 06:39:52.229593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.251 [2024-12-14 06:39:52.229767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.246504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.246682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.262601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.262782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.280498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.280678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.296157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.296339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.305637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.305843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.320723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.320925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.337514] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.337694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.352528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.352692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.362517] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.362550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.377493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.377527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.392481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.392514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.403778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.403811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.420167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.420202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.436598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.436633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.453344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.453375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.471528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.471562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.487132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.510 [2024-12-14 06:39:52.487164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.510 [2024-12-14 06:39:52.498611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.498790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.514807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.514841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.530803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.530836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.548014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.548047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.566309] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.566340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.581741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.581771] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.598023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.598083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.614591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.614636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.631400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.631444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.648488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.648532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.664728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.664772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.681357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.681401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.700119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.700150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.714002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.714049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.729201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.729231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:38.769 [2024-12-14 06:39:52.747442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:38.769 [2024-12-14 06:39:52.747487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.028 [2024-12-14 06:39:52.763233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.028 [2024-12-14 06:39:52.763292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.028 [2024-12-14 06:39:52.781157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.028 [2024-12-14 06:39:52.781200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.028 [2024-12-14 06:39:52.796216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.028 [2024-12-14 06:39:52.796246] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.028 [2024-12-14 06:39:52.807200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.028 [2024-12-14 06:39:52.807231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.028 [2024-12-14 06:39:52.822169] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.028 [2024-12-14 06:39:52.822197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.028 [2024-12-14 06:39:52.839857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.028 [2024-12-14 06:39:52.839913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.028 [2024-12-14 06:39:52.855086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.028 [2024-12-14 06:39:52.855131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.028 [2024-12-14 06:39:52.864544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.028 [2024-12-14 06:39:52.864589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.028 [2024-12-14 06:39:52.879144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.028 [2024-12-14 06:39:52.879188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.029 [2024-12-14 06:39:52.894574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.029 [2024-12-14 06:39:52.894644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.029 [2024-12-14 06:39:52.910860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.029 [2024-12-14 06:39:52.910940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.029 [2024-12-14 06:39:52.928885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.029 [2024-12-14 06:39:52.928936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.029 [2024-12-14 06:39:52.944233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.029 [2024-12-14 06:39:52.944278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.029 [2024-12-14 06:39:52.962335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.029 [2024-12-14 06:39:52.962381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.029 [2024-12-14 06:39:52.977928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.029 [2024-12-14 06:39:52.977964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.029 [2024-12-14 06:39:52.994622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.029 [2024-12-14 06:39:52.994665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.029 [2024-12-14 06:39:53.011393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.029 [2024-12-14 06:39:53.011437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.026180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.026238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.042542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.042586] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.058712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.058757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.076928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.076971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.090795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.090839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.107149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.107192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.122566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.122609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.139868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.139920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.155335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.155394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.172522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.172565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.189739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.189770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.207017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.207070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.223693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.223767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.240415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.240459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.257614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.257658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.288 [2024-12-14 06:39:53.275746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.288 [2024-12-14 06:39:53.275792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.290331] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.290374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.301395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.301438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.316965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.316989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.334339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.334382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.349804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.349835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.366884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.366962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.384855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.384922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.399452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.399495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.416057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.416087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.431164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.431228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.440821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.440864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.457136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.457167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.474586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.474631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.490655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.490700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.507950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.508020] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.523345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.523388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.548 [2024-12-14 06:39:53.531936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.548 [2024-12-14 06:39:53.531960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.807 [2024-12-14 06:39:53.547243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.807 [2024-12-14 06:39:53.547287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.807 [2024-12-14 06:39:53.563982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.807 [2024-12-14 06:39:53.564012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.578690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.578753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.587736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.587792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.604439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.604495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.620926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.620966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.638327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.638369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.654716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.654761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.672596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.672640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.687939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.687964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.705584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.705627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.722113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.722156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.738816] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.738860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.755586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.755630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.772035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.772064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.808 [2024-12-14 06:39:53.789143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.808 [2024-12-14 06:39:53.789192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:53.804879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:53.804932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:53.816050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:53.816079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:53.831951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:53.832004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:53.848407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:53.848473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:53.866167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:53.866200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:53.880522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:53.880568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:53.898401] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:53.898448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:53.912749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:53.912793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:53.928686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:53.928731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:53.944667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:53.944696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:53.962095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:53.962170] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:53.978829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:53.978907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:53.994557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:53.994593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:54.011907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:54.011960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:54.025846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:54.025889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:54.040711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:54.040770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.067 [2024-12-14 06:39:54.051861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.067 [2024-12-14 06:39:54.051930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.067280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.067324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.085252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.085297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.100703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.100748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.117484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.117528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.135425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.135490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.147927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.147967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 00:09:40.327 Latency(us) 00:09:40.327 [2024-12-14T06:39:54.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.327 [2024-12-14T06:39:54.319Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:40.327 Nvme1n1 : 5.01 12782.95 99.87 0.00 0.00 10000.79 4081.11 19660.80 00:09:40.327 [2024-12-14T06:39:54.319Z] =================================================================================================================== 00:09:40.327 [2024-12-14T06:39:54.319Z] Total : 
12782.95 99.87 0.00 0.00 10000.79 4081.11 19660.80 00:09:40.327 [2024-12-14 06:39:54.157781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.157810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.169772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.169797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.181805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.181841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.193809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.193846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.205834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.205871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.217817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.217856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.229817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.229851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.241792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.241816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.253792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.253815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.265824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.265857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.277830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.277857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.289820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.289844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.301824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.301847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.327 [2024-12-14 06:39:54.313849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.327 [2024-12-14 06:39:54.313879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.587 [2024-12-14 06:39:54.325830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:40.587 [2024-12-14 06:39:54.325853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.587 [2024-12-14 06:39:54.337847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.587 [2024-12-14 06:39:54.337877] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.587 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (62861) - No such process 00:09:40.587 06:39:54 -- target/zcopy.sh@49 -- # wait 62861 00:09:40.587 06:39:54 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.587 06:39:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.587 06:39:54 -- common/autotest_common.sh@10 -- # set +x 00:09:40.587 06:39:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.587 06:39:54 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:40.587 06:39:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.587 06:39:54 -- common/autotest_common.sh@10 -- # set +x 00:09:40.587 delay0 00:09:40.587 06:39:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.587 06:39:54 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:40.587 06:39:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.587 06:39:54 -- common/autotest_common.sh@10 -- # set +x 00:09:40.587 06:39:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.587 06:39:54 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:40.587 [2024-12-14 06:39:54.522785] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:47.153 Initializing NVMe Controllers 00:09:47.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:47.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:47.153 Initialization complete. Launching workers. 
00:09:47.153 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 76 00:09:47.153 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 363, failed to submit 33 00:09:47.153 success 246, unsuccess 117, failed 0 00:09:47.153 06:40:00 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:47.153 06:40:00 -- target/zcopy.sh@60 -- # nvmftestfini 00:09:47.153 06:40:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:47.153 06:40:00 -- nvmf/common.sh@116 -- # sync 00:09:47.153 06:40:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:47.153 06:40:00 -- nvmf/common.sh@119 -- # set +e 00:09:47.153 06:40:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:47.153 06:40:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:47.153 rmmod nvme_tcp 00:09:47.153 rmmod nvme_fabrics 00:09:47.153 rmmod nvme_keyring 00:09:47.153 06:40:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:47.153 06:40:00 -- nvmf/common.sh@123 -- # set -e 00:09:47.153 06:40:00 -- nvmf/common.sh@124 -- # return 0 00:09:47.153 06:40:00 -- nvmf/common.sh@477 -- # '[' -n 62705 ']' 00:09:47.153 06:40:00 -- nvmf/common.sh@478 -- # killprocess 62705 00:09:47.153 06:40:00 -- common/autotest_common.sh@936 -- # '[' -z 62705 ']' 00:09:47.153 06:40:00 -- common/autotest_common.sh@940 -- # kill -0 62705 00:09:47.154 06:40:00 -- common/autotest_common.sh@941 -- # uname 00:09:47.154 06:40:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:47.154 06:40:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62705 00:09:47.154 06:40:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:47.154 06:40:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:47.154 killing process with pid 62705 00:09:47.154 06:40:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62705' 00:09:47.154 06:40:00 -- common/autotest_common.sh@955 -- # kill 62705 00:09:47.154 06:40:00 -- common/autotest_common.sh@960 -- # wait 62705 00:09:47.154 06:40:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:47.154 06:40:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:47.154 06:40:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:47.154 06:40:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.154 06:40:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:47.154 06:40:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.154 06:40:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.154 06:40:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.154 06:40:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:47.154 ************************************ 00:09:47.154 END TEST nvmf_zcopy 00:09:47.154 ************************************ 00:09:47.154 00:09:47.154 real 0m24.417s 00:09:47.154 user 0m40.183s 00:09:47.154 sys 0m6.412s 00:09:47.154 06:40:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:47.154 06:40:00 -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 06:40:00 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:47.154 06:40:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:47.154 06:40:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:47.154 06:40:00 -- common/autotest_common.sh@10 -- # set +x 00:09:47.154 ************************************ 00:09:47.154 START TEST nvmf_nmic 
00:09:47.154 ************************************ 00:09:47.154 06:40:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:47.154 * Looking for test storage... 00:09:47.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:47.154 06:40:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:47.154 06:40:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:47.154 06:40:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:47.154 06:40:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:47.154 06:40:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:47.154 06:40:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:47.154 06:40:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:47.154 06:40:01 -- scripts/common.sh@335 -- # IFS=.-: 00:09:47.154 06:40:01 -- scripts/common.sh@335 -- # read -ra ver1 00:09:47.154 06:40:01 -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.154 06:40:01 -- scripts/common.sh@336 -- # read -ra ver2 00:09:47.154 06:40:01 -- scripts/common.sh@337 -- # local 'op=<' 00:09:47.154 06:40:01 -- scripts/common.sh@339 -- # ver1_l=2 00:09:47.154 06:40:01 -- scripts/common.sh@340 -- # ver2_l=1 00:09:47.154 06:40:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:47.154 06:40:01 -- scripts/common.sh@343 -- # case "$op" in 00:09:47.154 06:40:01 -- scripts/common.sh@344 -- # : 1 00:09:47.154 06:40:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:47.154 06:40:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.154 06:40:01 -- scripts/common.sh@364 -- # decimal 1 00:09:47.154 06:40:01 -- scripts/common.sh@352 -- # local d=1 00:09:47.154 06:40:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.154 06:40:01 -- scripts/common.sh@354 -- # echo 1 00:09:47.154 06:40:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:47.154 06:40:01 -- scripts/common.sh@365 -- # decimal 2 00:09:47.154 06:40:01 -- scripts/common.sh@352 -- # local d=2 00:09:47.154 06:40:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.154 06:40:01 -- scripts/common.sh@354 -- # echo 2 00:09:47.154 06:40:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:47.154 06:40:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:47.154 06:40:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:47.154 06:40:01 -- scripts/common.sh@367 -- # return 0 00:09:47.154 06:40:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.154 06:40:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.154 --rc genhtml_branch_coverage=1 00:09:47.154 --rc genhtml_function_coverage=1 00:09:47.154 --rc genhtml_legend=1 00:09:47.154 --rc geninfo_all_blocks=1 00:09:47.154 --rc geninfo_unexecuted_blocks=1 00:09:47.154 00:09:47.154 ' 00:09:47.154 06:40:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.154 --rc genhtml_branch_coverage=1 00:09:47.154 --rc genhtml_function_coverage=1 00:09:47.154 --rc genhtml_legend=1 00:09:47.154 --rc geninfo_all_blocks=1 00:09:47.154 --rc geninfo_unexecuted_blocks=1 00:09:47.154 00:09:47.154 ' 00:09:47.154 06:40:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.154 --rc 
genhtml_branch_coverage=1 00:09:47.154 --rc genhtml_function_coverage=1 00:09:47.154 --rc genhtml_legend=1 00:09:47.154 --rc geninfo_all_blocks=1 00:09:47.154 --rc geninfo_unexecuted_blocks=1 00:09:47.154 00:09:47.154 ' 00:09:47.154 06:40:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.154 --rc genhtml_branch_coverage=1 00:09:47.154 --rc genhtml_function_coverage=1 00:09:47.154 --rc genhtml_legend=1 00:09:47.154 --rc geninfo_all_blocks=1 00:09:47.154 --rc geninfo_unexecuted_blocks=1 00:09:47.154 00:09:47.154 ' 00:09:47.154 06:40:01 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:47.414 06:40:01 -- nvmf/common.sh@7 -- # uname -s 00:09:47.414 06:40:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.414 06:40:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.414 06:40:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.414 06:40:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.414 06:40:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.414 06:40:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.414 06:40:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.414 06:40:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.414 06:40:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.414 06:40:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.414 06:40:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:09:47.414 06:40:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:09:47.414 06:40:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.414 06:40:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.414 06:40:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:47.414 06:40:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.414 06:40:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.414 06:40:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.414 06:40:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.414 06:40:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.414 06:40:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.414 06:40:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.414 06:40:01 -- paths/export.sh@5 -- # export PATH 00:09:47.414 06:40:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.414 06:40:01 -- nvmf/common.sh@46 -- # : 0 00:09:47.414 06:40:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:47.414 06:40:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:47.414 06:40:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:47.414 06:40:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.414 06:40:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.414 06:40:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:47.414 06:40:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:47.414 06:40:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:47.414 06:40:01 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.414 06:40:01 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.414 06:40:01 -- target/nmic.sh@14 -- # nvmftestinit 00:09:47.414 06:40:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:47.414 06:40:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.414 06:40:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:47.414 06:40:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:47.414 06:40:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:47.414 06:40:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.414 06:40:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.414 06:40:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.414 06:40:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:47.414 06:40:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:47.414 06:40:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:47.414 06:40:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:47.414 06:40:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:47.414 06:40:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:47.414 06:40:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.414 06:40:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.414 06:40:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:47.414 06:40:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:47.414 06:40:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:47.414 06:40:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:47.414 06:40:01 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:47.414 06:40:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.414 06:40:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:47.414 06:40:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:47.414 06:40:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:47.414 06:40:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:47.414 06:40:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:47.414 06:40:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:47.414 Cannot find device "nvmf_tgt_br" 00:09:47.414 06:40:01 -- nvmf/common.sh@154 -- # true 00:09:47.414 06:40:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.414 Cannot find device "nvmf_tgt_br2" 00:09:47.414 06:40:01 -- nvmf/common.sh@155 -- # true 00:09:47.414 06:40:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:47.414 06:40:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:47.414 Cannot find device "nvmf_tgt_br" 00:09:47.414 06:40:01 -- nvmf/common.sh@157 -- # true 00:09:47.414 06:40:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:47.414 Cannot find device "nvmf_tgt_br2" 00:09:47.414 06:40:01 -- nvmf/common.sh@158 -- # true 00:09:47.414 06:40:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:47.414 06:40:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:47.414 06:40:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.414 06:40:01 -- nvmf/common.sh@161 -- # true 00:09:47.414 06:40:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.414 06:40:01 -- nvmf/common.sh@162 -- # true 00:09:47.414 06:40:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:47.414 06:40:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:47.414 06:40:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:47.414 06:40:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:47.414 06:40:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:47.414 06:40:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:47.414 06:40:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:47.414 06:40:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:47.414 06:40:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:47.414 06:40:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:47.414 06:40:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:47.414 06:40:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:47.414 06:40:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:47.673 06:40:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:47.673 06:40:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:47.673 06:40:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:09:47.673 06:40:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:47.673 06:40:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:47.673 06:40:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:47.673 06:40:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:47.673 06:40:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:47.673 06:40:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:47.673 06:40:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:47.673 06:40:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:47.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:09:47.673 00:09:47.673 --- 10.0.0.2 ping statistics --- 00:09:47.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.673 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:47.673 06:40:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:47.673 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:47.673 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:09:47.673 00:09:47.673 --- 10.0.0.3 ping statistics --- 00:09:47.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.673 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:09:47.673 06:40:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:47.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:47.673 00:09:47.673 --- 10.0.0.1 ping statistics --- 00:09:47.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.673 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:47.673 06:40:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.673 06:40:01 -- nvmf/common.sh@421 -- # return 0 00:09:47.673 06:40:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:47.673 06:40:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.673 06:40:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:47.673 06:40:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:47.673 06:40:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.673 06:40:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:47.673 06:40:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:47.673 06:40:01 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:47.673 06:40:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:47.673 06:40:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.673 06:40:01 -- common/autotest_common.sh@10 -- # set +x 00:09:47.673 06:40:01 -- nvmf/common.sh@469 -- # nvmfpid=63188 00:09:47.673 06:40:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.673 06:40:01 -- nvmf/common.sh@470 -- # waitforlisten 63188 00:09:47.673 06:40:01 -- common/autotest_common.sh@829 -- # '[' -z 63188 ']' 00:09:47.673 06:40:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.673 06:40:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
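With NET_TYPE=virt the nmic run builds its fabric entirely out of virtual devices: the target side lives in the nvmf_tgt_ns_spdk namespace with 10.0.0.2/24 (plus 10.0.0.3/24 on a second interface), the initiator keeps 10.0.0.1/24, and both sides plug into the nvmf_br bridge, which is exactly what the nvmf_veth_init trace above shows. A condensed sketch of that topology on a clean host, assuming root and iproute2 (the second target interface, nvmf_tgt_if2, is created the same way and is omitted here for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The single-packet pings mirror the three ping blocks in the log and confirm the bridge forwards in both directions before the target is started.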
00:09:47.673 06:40:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.673 06:40:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.673 06:40:01 -- common/autotest_common.sh@10 -- # set +x 00:09:47.673 [2024-12-14 06:40:01.589850] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:47.673 [2024-12-14 06:40:01.590159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.932 [2024-12-14 06:40:01.727972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.932 [2024-12-14 06:40:01.798595] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:47.932 [2024-12-14 06:40:01.798772] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.932 [2024-12-14 06:40:01.798788] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.932 [2024-12-14 06:40:01.798799] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.932 [2024-12-14 06:40:01.799164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.932 [2024-12-14 06:40:01.799674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.932 [2024-12-14 06:40:01.799849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.932 [2024-12-14 06:40:01.799858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.868 06:40:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.868 06:40:02 -- common/autotest_common.sh@862 -- # return 0 00:09:48.868 06:40:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:48.868 06:40:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:48.868 06:40:02 -- common/autotest_common.sh@10 -- # set +x 00:09:48.868 06:40:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.868 06:40:02 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:48.868 06:40:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.868 06:40:02 -- common/autotest_common.sh@10 -- # set +x 00:09:48.868 [2024-12-14 06:40:02.668190] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.868 06:40:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.868 06:40:02 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:48.868 06:40:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.868 06:40:02 -- common/autotest_common.sh@10 -- # set +x 00:09:48.868 Malloc0 00:09:48.868 06:40:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.868 06:40:02 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:48.868 06:40:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.868 06:40:02 -- common/autotest_common.sh@10 -- # set +x 00:09:48.868 06:40:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.868 06:40:02 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.868 06:40:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.868 06:40:02 
-- common/autotest_common.sh@10 -- # set +x 00:09:48.868 06:40:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.868 06:40:02 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.868 06:40:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.868 06:40:02 -- common/autotest_common.sh@10 -- # set +x 00:09:48.868 [2024-12-14 06:40:02.731521] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.868 test case1: single bdev can't be used in multiple subsystems 00:09:48.868 06:40:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.868 06:40:02 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:48.868 06:40:02 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:48.868 06:40:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.868 06:40:02 -- common/autotest_common.sh@10 -- # set +x 00:09:48.868 06:40:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.868 06:40:02 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:48.868 06:40:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.868 06:40:02 -- common/autotest_common.sh@10 -- # set +x 00:09:48.868 06:40:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.868 06:40:02 -- target/nmic.sh@28 -- # nmic_status=0 00:09:48.868 06:40:02 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:48.868 06:40:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.868 06:40:02 -- common/autotest_common.sh@10 -- # set +x 00:09:48.868 [2024-12-14 06:40:02.755384] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:48.868 [2024-12-14 06:40:02.755422] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:48.868 [2024-12-14 06:40:02.755450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.868 request: 00:09:48.868 { 00:09:48.868 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:48.868 "namespace": { 00:09:48.868 "bdev_name": "Malloc0" 00:09:48.868 }, 00:09:48.868 "method": "nvmf_subsystem_add_ns", 00:09:48.868 "req_id": 1 00:09:48.868 } 00:09:48.868 Got JSON-RPC error response 00:09:48.868 response: 00:09:48.868 { 00:09:48.868 "code": -32602, 00:09:48.868 "message": "Invalid parameters" 00:09:48.868 } 00:09:48.868 06:40:02 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:48.868 06:40:02 -- target/nmic.sh@29 -- # nmic_status=1 00:09:48.868 06:40:02 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:48.868 06:40:02 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:48.868 Adding namespace failed - expected result. 
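Test case1 above is the negative path: Malloc0 is already claimed (type exclusive_write) by cnode1, so adding it to a second subsystem has to fail, and the script counts the -32602 "Invalid parameters" response as the expected outcome. The same conflict can be reproduced by hand with the RPCs that appear in the trace, assuming a running target and scripts/rpc.py on the path:

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # succeeds and claims the bdev
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0      # rejected: bdev already claimed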
00:09:48.868 06:40:02 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:48.868 test case2: host connect to nvmf target in multiple paths 00:09:48.868 06:40:02 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:48.868 06:40:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.868 06:40:02 -- common/autotest_common.sh@10 -- # set +x 00:09:48.868 [2024-12-14 06:40:02.771522] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:48.868 06:40:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.868 06:40:02 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:49.127 06:40:02 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:49.127 06:40:03 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:49.127 06:40:03 -- common/autotest_common.sh@1187 -- # local i=0 00:09:49.127 06:40:03 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:49.127 06:40:03 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:09:49.127 06:40:03 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:51.659 06:40:05 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:09:51.659 06:40:05 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:51.659 06:40:05 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.659 06:40:05 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:09:51.659 06:40:05 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:51.659 06:40:05 -- common/autotest_common.sh@1197 -- # return 0 00:09:51.659 06:40:05 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:51.659 [global] 00:09:51.659 thread=1 00:09:51.659 invalidate=1 00:09:51.659 rw=write 00:09:51.659 time_based=1 00:09:51.659 runtime=1 00:09:51.659 ioengine=libaio 00:09:51.659 direct=1 00:09:51.659 bs=4096 00:09:51.659 iodepth=1 00:09:51.659 norandommap=0 00:09:51.659 numjobs=1 00:09:51.659 00:09:51.659 verify_dump=1 00:09:51.659 verify_backlog=512 00:09:51.659 verify_state_save=0 00:09:51.659 do_verify=1 00:09:51.659 verify=crc32c-intel 00:09:51.659 [job0] 00:09:51.659 filename=/dev/nvme0n1 00:09:51.659 Could not set queue depth (nvme0n1) 00:09:51.659 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.659 fio-3.35 00:09:51.659 Starting 1 thread 00:09:52.595 00:09:52.595 job0: (groupid=0, jobs=1): err= 0: pid=63280: Sat Dec 14 06:40:06 2024 00:09:52.595 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:52.595 slat (nsec): min=10621, max=81580, avg=14281.19, stdev=4782.78 00:09:52.595 clat (usec): min=124, max=524, avg=173.21, stdev=23.52 00:09:52.595 lat (usec): min=136, max=534, avg=187.49, stdev=24.69 00:09:52.595 clat percentiles (usec): 00:09:52.595 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:09:52.595 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:09:52.595 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 206], 
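Test case2 then adds a second listener on port 4421 and connects the host to cnode1 over both paths before running the wrapped fio job against /dev/nvme0n1; the generated job file is printed in full above. Run outside the wrapper, the same workload would look roughly like this (a sketch, assuming fio is installed and the connected namespace appears as /dev/nvme0n1):

fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread --invalidate=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_backlog=512 --verify_dump=1

It is a one-second, single-job, queue-depth-1 sequential write with CRC32C verification, so the bandwidth figures that follow mostly reflect per-I/O latency rather than device throughput.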
95.00th=[ 215], 00:09:52.595 | 99.00th=[ 233], 99.50th=[ 241], 99.90th=[ 258], 99.95th=[ 269], 00:09:52.595 | 99.99th=[ 523] 00:09:52.595 write: IOPS=3175, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec); 0 zone resets 00:09:52.595 slat (usec): min=12, max=142, avg=21.15, stdev= 6.70 00:09:52.595 clat (usec): min=80, max=227, avg=109.04, stdev=17.72 00:09:52.595 lat (usec): min=96, max=370, avg=130.19, stdev=19.91 00:09:52.595 clat percentiles (usec): 00:09:52.595 | 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 94], 00:09:52.595 | 30.00th=[ 98], 40.00th=[ 101], 50.00th=[ 105], 60.00th=[ 111], 00:09:52.595 | 70.00th=[ 117], 80.00th=[ 123], 90.00th=[ 135], 95.00th=[ 145], 00:09:52.595 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 188], 99.95th=[ 212], 00:09:52.595 | 99.99th=[ 229] 00:09:52.595 bw ( KiB/s): min=12728, max=12728, per=100.00%, avg=12728.00, stdev= 0.00, samples=1 00:09:52.595 iops : min= 3182, max= 3182, avg=3182.00, stdev= 0.00, samples=1 00:09:52.595 lat (usec) : 100=18.48%, 250=81.43%, 500=0.08%, 750=0.02% 00:09:52.595 cpu : usr=2.50%, sys=8.50%, ctx=6251, majf=0, minf=5 00:09:52.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.595 issued rwts: total=3072,3179,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.595 00:09:52.595 Run status group 0 (all jobs): 00:09:52.595 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:52.595 WRITE: bw=12.4MiB/s (13.0MB/s), 12.4MiB/s-12.4MiB/s (13.0MB/s-13.0MB/s), io=12.4MiB (13.0MB), run=1001-1001msec 00:09:52.595 00:09:52.595 Disk stats (read/write): 00:09:52.595 nvme0n1: ios=2667/3072, merge=0/0, ticks=516/390, in_queue=906, util=91.31% 00:09:52.595 06:40:06 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:52.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:52.595 06:40:06 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:52.595 06:40:06 -- common/autotest_common.sh@1208 -- # local i=0 00:09:52.595 06:40:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:52.595 06:40:06 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.595 06:40:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.595 06:40:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:52.595 06:40:06 -- common/autotest_common.sh@1220 -- # return 0 00:09:52.595 06:40:06 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:52.595 06:40:06 -- target/nmic.sh@53 -- # nvmftestfini 00:09:52.595 06:40:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:52.595 06:40:06 -- nvmf/common.sh@116 -- # sync 00:09:52.595 06:40:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:52.595 06:40:06 -- nvmf/common.sh@119 -- # set +e 00:09:52.595 06:40:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:52.595 06:40:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:52.595 rmmod nvme_tcp 00:09:52.595 rmmod nvme_fabrics 00:09:52.595 rmmod nvme_keyring 00:09:52.595 06:40:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:52.595 06:40:06 -- nvmf/common.sh@123 -- # set -e 00:09:52.595 06:40:06 -- nvmf/common.sh@124 -- # return 0 00:09:52.595 06:40:06 -- nvmf/common.sh@477 -- # 
'[' -n 63188 ']' 00:09:52.595 06:40:06 -- nvmf/common.sh@478 -- # killprocess 63188 00:09:52.595 06:40:06 -- common/autotest_common.sh@936 -- # '[' -z 63188 ']' 00:09:52.595 06:40:06 -- common/autotest_common.sh@940 -- # kill -0 63188 00:09:52.595 06:40:06 -- common/autotest_common.sh@941 -- # uname 00:09:52.595 06:40:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:52.595 06:40:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63188 00:09:52.853 killing process with pid 63188 00:09:52.853 06:40:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:52.853 06:40:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:52.853 06:40:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63188' 00:09:52.853 06:40:06 -- common/autotest_common.sh@955 -- # kill 63188 00:09:52.853 06:40:06 -- common/autotest_common.sh@960 -- # wait 63188 00:09:52.853 06:40:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:52.853 06:40:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:52.853 06:40:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:52.853 06:40:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.853 06:40:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:52.853 06:40:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.854 06:40:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.854 06:40:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.854 06:40:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:52.854 00:09:52.854 real 0m5.833s 00:09:52.854 user 0m18.654s 00:09:52.854 sys 0m2.295s 00:09:52.854 06:40:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:52.854 06:40:06 -- common/autotest_common.sh@10 -- # set +x 00:09:52.854 ************************************ 00:09:52.854 END TEST nvmf_nmic 00:09:52.854 ************************************ 00:09:53.113 06:40:06 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:53.113 06:40:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:53.113 06:40:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.113 06:40:06 -- common/autotest_common.sh@10 -- # set +x 00:09:53.113 ************************************ 00:09:53.113 START TEST nvmf_fio_target 00:09:53.113 ************************************ 00:09:53.113 06:40:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:53.113 * Looking for test storage... 
00:09:53.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:53.113 06:40:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:53.113 06:40:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:53.113 06:40:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:53.113 06:40:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:53.113 06:40:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:53.113 06:40:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:53.113 06:40:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:53.113 06:40:07 -- scripts/common.sh@335 -- # IFS=.-: 00:09:53.113 06:40:07 -- scripts/common.sh@335 -- # read -ra ver1 00:09:53.113 06:40:07 -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.113 06:40:07 -- scripts/common.sh@336 -- # read -ra ver2 00:09:53.113 06:40:07 -- scripts/common.sh@337 -- # local 'op=<' 00:09:53.113 06:40:07 -- scripts/common.sh@339 -- # ver1_l=2 00:09:53.113 06:40:07 -- scripts/common.sh@340 -- # ver2_l=1 00:09:53.113 06:40:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:53.113 06:40:07 -- scripts/common.sh@343 -- # case "$op" in 00:09:53.113 06:40:07 -- scripts/common.sh@344 -- # : 1 00:09:53.113 06:40:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:53.113 06:40:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:53.113 06:40:07 -- scripts/common.sh@364 -- # decimal 1 00:09:53.113 06:40:07 -- scripts/common.sh@352 -- # local d=1 00:09:53.113 06:40:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.113 06:40:07 -- scripts/common.sh@354 -- # echo 1 00:09:53.113 06:40:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:53.113 06:40:07 -- scripts/common.sh@365 -- # decimal 2 00:09:53.113 06:40:07 -- scripts/common.sh@352 -- # local d=2 00:09:53.113 06:40:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.113 06:40:07 -- scripts/common.sh@354 -- # echo 2 00:09:53.113 06:40:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:53.113 06:40:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:53.113 06:40:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:53.113 06:40:07 -- scripts/common.sh@367 -- # return 0 00:09:53.113 06:40:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.113 06:40:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:53.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.113 --rc genhtml_branch_coverage=1 00:09:53.113 --rc genhtml_function_coverage=1 00:09:53.113 --rc genhtml_legend=1 00:09:53.113 --rc geninfo_all_blocks=1 00:09:53.113 --rc geninfo_unexecuted_blocks=1 00:09:53.113 00:09:53.113 ' 00:09:53.113 06:40:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:53.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.113 --rc genhtml_branch_coverage=1 00:09:53.113 --rc genhtml_function_coverage=1 00:09:53.113 --rc genhtml_legend=1 00:09:53.113 --rc geninfo_all_blocks=1 00:09:53.113 --rc geninfo_unexecuted_blocks=1 00:09:53.113 00:09:53.113 ' 00:09:53.113 06:40:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:53.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.113 --rc genhtml_branch_coverage=1 00:09:53.113 --rc genhtml_function_coverage=1 00:09:53.113 --rc genhtml_legend=1 00:09:53.113 --rc geninfo_all_blocks=1 00:09:53.113 --rc geninfo_unexecuted_blocks=1 00:09:53.113 00:09:53.113 ' 00:09:53.113 
06:40:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:53.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.113 --rc genhtml_branch_coverage=1 00:09:53.113 --rc genhtml_function_coverage=1 00:09:53.113 --rc genhtml_legend=1 00:09:53.113 --rc geninfo_all_blocks=1 00:09:53.113 --rc geninfo_unexecuted_blocks=1 00:09:53.113 00:09:53.113 ' 00:09:53.113 06:40:07 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.113 06:40:07 -- nvmf/common.sh@7 -- # uname -s 00:09:53.113 06:40:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.113 06:40:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.113 06:40:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.113 06:40:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.113 06:40:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.113 06:40:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.113 06:40:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.113 06:40:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.113 06:40:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.113 06:40:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.113 06:40:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:09:53.113 06:40:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:09:53.113 06:40:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.113 06:40:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.113 06:40:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:53.113 06:40:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.113 06:40:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.113 06:40:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.113 06:40:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.113 06:40:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.113 06:40:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.113 06:40:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.113 06:40:07 -- paths/export.sh@5 -- # export PATH 00:09:53.113 06:40:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.113 06:40:07 -- nvmf/common.sh@46 -- # : 0 00:09:53.113 06:40:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:53.113 06:40:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:53.113 06:40:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:53.113 06:40:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.113 06:40:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.113 06:40:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:53.113 06:40:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:53.113 06:40:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:53.113 06:40:07 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.113 06:40:07 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.113 06:40:07 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:53.114 06:40:07 -- target/fio.sh@16 -- # nvmftestinit 00:09:53.114 06:40:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:53.114 06:40:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.114 06:40:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:53.114 06:40:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:53.114 06:40:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:53.114 06:40:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.114 06:40:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.114 06:40:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.114 06:40:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:53.114 06:40:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:53.114 06:40:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:53.114 06:40:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:53.114 06:40:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:53.114 06:40:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:53.114 06:40:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.114 06:40:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.114 06:40:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:53.114 06:40:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:53.114 06:40:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:53.114 06:40:07 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:53.114 06:40:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:53.114 06:40:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.114 06:40:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:53.114 06:40:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:53.114 06:40:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:53.114 06:40:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:53.114 06:40:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:53.372 06:40:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:53.372 Cannot find device "nvmf_tgt_br" 00:09:53.372 06:40:07 -- nvmf/common.sh@154 -- # true 00:09:53.372 06:40:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.372 Cannot find device "nvmf_tgt_br2" 00:09:53.372 06:40:07 -- nvmf/common.sh@155 -- # true 00:09:53.373 06:40:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:53.373 06:40:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:53.373 Cannot find device "nvmf_tgt_br" 00:09:53.373 06:40:07 -- nvmf/common.sh@157 -- # true 00:09:53.373 06:40:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:53.373 Cannot find device "nvmf_tgt_br2" 00:09:53.373 06:40:07 -- nvmf/common.sh@158 -- # true 00:09:53.373 06:40:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:53.373 06:40:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:53.373 06:40:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.373 06:40:07 -- nvmf/common.sh@161 -- # true 00:09:53.373 06:40:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.373 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.373 06:40:07 -- nvmf/common.sh@162 -- # true 00:09:53.373 06:40:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:53.373 06:40:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:53.373 06:40:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.373 06:40:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.373 06:40:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.373 06:40:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:53.373 06:40:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:53.373 06:40:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:53.373 06:40:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:53.373 06:40:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:53.373 06:40:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:53.373 06:40:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:53.373 06:40:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:53.373 06:40:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:53.373 06:40:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:09:53.373 06:40:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:53.373 06:40:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:53.373 06:40:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:53.373 06:40:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:53.373 06:40:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:53.373 06:40:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:53.373 06:40:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:53.632 06:40:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:53.632 06:40:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:53.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:09:53.632 00:09:53.632 --- 10.0.0.2 ping statistics --- 00:09:53.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.632 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:53.632 06:40:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:53.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:53.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:09:53.632 00:09:53.632 --- 10.0.0.3 ping statistics --- 00:09:53.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.632 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:53.632 06:40:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:53.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:09:53.632 00:09:53.632 --- 10.0.0.1 ping statistics --- 00:09:53.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.632 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:53.632 06:40:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.632 06:40:07 -- nvmf/common.sh@421 -- # return 0 00:09:53.632 06:40:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:53.632 06:40:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.632 06:40:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:53.632 06:40:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:53.632 06:40:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.632 06:40:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:53.632 06:40:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:53.632 06:40:07 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:53.632 06:40:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:53.632 06:40:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:53.632 06:40:07 -- common/autotest_common.sh@10 -- # set +x 00:09:53.632 06:40:07 -- nvmf/common.sh@469 -- # nvmfpid=63464 00:09:53.632 06:40:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:53.632 06:40:07 -- nvmf/common.sh@470 -- # waitforlisten 63464 00:09:53.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
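The fio_target suite rebuilds the same virtual fabric and then launches nvmf_tgt inside the target namespace; nvmfappstart records the pid (63464 here) and waitforlisten blocks until the JSON-RPC socket is usable before any configuration RPCs are issued. A simplified stand-in for that start-and-wait step, assuming the namespace from the setup above (the polling loop is an illustration, not the actual waitforlisten implementation):

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll until the target's JSON-RPC socket answers
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done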
00:09:53.632 06:40:07 -- common/autotest_common.sh@829 -- # '[' -z 63464 ']' 00:09:53.632 06:40:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.632 06:40:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:53.632 06:40:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.632 06:40:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:53.632 06:40:07 -- common/autotest_common.sh@10 -- # set +x 00:09:53.632 [2024-12-14 06:40:07.451747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:53.632 [2024-12-14 06:40:07.451826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.632 [2024-12-14 06:40:07.584179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.891 [2024-12-14 06:40:07.640535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:53.891 [2024-12-14 06:40:07.640862] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.891 [2024-12-14 06:40:07.641007] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.891 [2024-12-14 06:40:07.641149] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.891 [2024-12-14 06:40:07.641498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.891 [2024-12-14 06:40:07.641652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.891 [2024-12-14 06:40:07.641804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.891 [2024-12-14 06:40:07.641803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.458 06:40:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:54.458 06:40:08 -- common/autotest_common.sh@862 -- # return 0 00:09:54.458 06:40:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:54.458 06:40:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:54.458 06:40:08 -- common/autotest_common.sh@10 -- # set +x 00:09:54.717 06:40:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.717 06:40:08 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:54.717 [2024-12-14 06:40:08.675562] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.976 06:40:08 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.235 06:40:09 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:55.235 06:40:09 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.494 06:40:09 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:55.494 06:40:09 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:55.753 06:40:09 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:55.753 06:40:09 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:56.011 06:40:09 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:56.011 06:40:09 -- target/fio.sh@26 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:56.270 06:40:10 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:56.528 06:40:10 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:56.528 06:40:10 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:56.787 06:40:10 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:56.787 06:40:10 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:56.787 06:40:10 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:56.787 06:40:10 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:57.046 06:40:11 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:57.613 06:40:11 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:57.613 06:40:11 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.613 06:40:11 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:57.613 06:40:11 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:57.871 06:40:11 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.130 [2024-12-14 06:40:11.978883] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:58.130 06:40:11 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:58.389 06:40:12 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:58.648 06:40:12 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:58.648 06:40:12 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:58.648 06:40:12 -- common/autotest_common.sh@1187 -- # local i=0 00:09:58.648 06:40:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:58.648 06:40:12 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:09:58.648 06:40:12 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:09:58.648 06:40:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:01.219 06:40:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:10:01.219 06:40:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:01.219 06:40:14 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:01.219 06:40:14 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:10:01.219 06:40:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.219 06:40:14 -- common/autotest_common.sh@1197 -- # return 0 00:10:01.219 06:40:14 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:01.219 [global] 00:10:01.219 thread=1 00:10:01.219 invalidate=1 00:10:01.219 rw=write 00:10:01.219 time_based=1 
00:10:01.219 runtime=1 00:10:01.219 ioengine=libaio 00:10:01.219 direct=1 00:10:01.219 bs=4096 00:10:01.219 iodepth=1 00:10:01.219 norandommap=0 00:10:01.219 numjobs=1 00:10:01.219 00:10:01.219 verify_dump=1 00:10:01.219 verify_backlog=512 00:10:01.219 verify_state_save=0 00:10:01.219 do_verify=1 00:10:01.219 verify=crc32c-intel 00:10:01.219 [job0] 00:10:01.219 filename=/dev/nvme0n1 00:10:01.219 [job1] 00:10:01.219 filename=/dev/nvme0n2 00:10:01.219 [job2] 00:10:01.219 filename=/dev/nvme0n3 00:10:01.219 [job3] 00:10:01.219 filename=/dev/nvme0n4 00:10:01.219 Could not set queue depth (nvme0n1) 00:10:01.219 Could not set queue depth (nvme0n2) 00:10:01.219 Could not set queue depth (nvme0n3) 00:10:01.219 Could not set queue depth (nvme0n4) 00:10:01.220 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.220 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.220 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.220 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:01.220 fio-3.35 00:10:01.220 Starting 4 threads 00:10:02.156 00:10:02.156 job0: (groupid=0, jobs=1): err= 0: pid=63648: Sat Dec 14 06:40:15 2024 00:10:02.156 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:02.156 slat (nsec): min=10297, max=39669, avg=13196.74, stdev=3084.19 00:10:02.156 clat (usec): min=124, max=563, avg=160.51, stdev=17.22 00:10:02.156 lat (usec): min=136, max=575, avg=173.70, stdev=17.88 00:10:02.156 clat percentiles (usec): 00:10:02.156 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:10:02.156 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:10:02.156 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 188], 00:10:02.156 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 235], 99.95th=[ 322], 00:10:02.156 | 99.99th=[ 562] 00:10:02.156 write: IOPS=3233, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec); 0 zone resets 00:10:02.156 slat (usec): min=12, max=107, avg=19.30, stdev= 4.62 00:10:02.156 clat (usec): min=87, max=249, avg=121.81, stdev=13.98 00:10:02.156 lat (usec): min=104, max=356, avg=141.11, stdev=15.20 00:10:02.156 clat percentiles (usec): 00:10:02.156 | 1.00th=[ 94], 5.00th=[ 102], 10.00th=[ 106], 20.00th=[ 111], 00:10:02.156 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 122], 60.00th=[ 124], 00:10:02.156 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 147], 00:10:02.156 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 188], 00:10:02.156 | 99.99th=[ 249] 00:10:02.156 bw ( KiB/s): min=12392, max=12392, per=29.80%, avg=12392.00, stdev= 0.00, samples=1 00:10:02.156 iops : min= 3098, max= 3098, avg=3098.00, stdev= 0.00, samples=1 00:10:02.156 lat (usec) : 100=2.06%, 250=97.89%, 500=0.03%, 750=0.02% 00:10:02.156 cpu : usr=2.30%, sys=8.10%, ctx=6310, majf=0, minf=1 00:10:02.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.156 issued rwts: total=3072,3237,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.156 job1: (groupid=0, jobs=1): err= 0: pid=63649: Sat Dec 14 06:40:15 2024 00:10:02.156 read: IOPS=1908, BW=7632KiB/s 
(7816kB/s)(7640KiB/1001msec) 00:10:02.156 slat (nsec): min=11461, max=48119, avg=14863.87, stdev=4098.42 00:10:02.156 clat (usec): min=146, max=892, avg=268.59, stdev=54.12 00:10:02.156 lat (usec): min=159, max=911, avg=283.46, stdev=56.01 00:10:02.156 clat percentiles (usec): 00:10:02.156 | 1.00th=[ 192], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:10:02.156 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:10:02.156 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 343], 00:10:02.156 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 603], 99.95th=[ 889], 00:10:02.156 | 99.99th=[ 889] 00:10:02.156 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:02.156 slat (usec): min=16, max=118, avg=21.74, stdev= 5.46 00:10:02.156 clat (usec): min=92, max=683, avg=198.73, stdev=37.35 00:10:02.156 lat (usec): min=110, max=705, avg=220.47, stdev=38.28 00:10:02.156 clat percentiles (usec): 00:10:02.156 | 1.00th=[ 103], 5.00th=[ 121], 10.00th=[ 165], 20.00th=[ 184], 00:10:02.156 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:10:02.156 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 243], 00:10:02.156 | 99.00th=[ 265], 99.50th=[ 322], 99.90th=[ 502], 99.95th=[ 562], 00:10:02.156 | 99.99th=[ 685] 00:10:02.156 bw ( KiB/s): min= 8192, max= 8192, per=19.70%, avg=8192.00, stdev= 0.00, samples=1 00:10:02.156 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:02.156 lat (usec) : 100=0.25%, 250=66.73%, 500=31.94%, 750=1.06%, 1000=0.03% 00:10:02.156 cpu : usr=1.50%, sys=5.80%, ctx=3959, majf=0, minf=9 00:10:02.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.156 issued rwts: total=1910,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.156 job2: (groupid=0, jobs=1): err= 0: pid=63650: Sat Dec 14 06:40:15 2024 00:10:02.156 read: IOPS=2668, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:10:02.156 slat (nsec): min=10586, max=49333, avg=13472.55, stdev=2968.72 00:10:02.156 clat (usec): min=142, max=1880, avg=179.18, stdev=36.21 00:10:02.156 lat (usec): min=155, max=1894, avg=192.65, stdev=36.31 00:10:02.156 clat percentiles (usec): 00:10:02.156 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:10:02.156 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:10:02.156 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:10:02.156 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 249], 99.95th=[ 437], 00:10:02.156 | 99.99th=[ 1876] 00:10:02.156 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:02.156 slat (usec): min=13, max=121, avg=19.71, stdev= 4.75 00:10:02.156 clat (usec): min=102, max=236, avg=135.36, stdev=14.01 00:10:02.156 lat (usec): min=120, max=358, avg=155.07, stdev=14.96 00:10:02.156 clat percentiles (usec): 00:10:02.156 | 1.00th=[ 112], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 124], 00:10:02.156 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:10:02.156 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 161], 00:10:02.156 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 200], 99.95th=[ 208], 00:10:02.156 | 99.99th=[ 237] 00:10:02.156 bw ( KiB/s): min=12288, max=12288, per=29.55%, avg=12288.00, stdev= 0.00, samples=1 00:10:02.156 
iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:02.156 lat (usec) : 250=99.97%, 500=0.02% 00:10:02.156 lat (msec) : 2=0.02% 00:10:02.156 cpu : usr=2.30%, sys=7.40%, ctx=5743, majf=0, minf=19 00:10:02.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.156 issued rwts: total=2671,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.156 job3: (groupid=0, jobs=1): err= 0: pid=63651: Sat Dec 14 06:40:15 2024 00:10:02.156 read: IOPS=1853, BW=7413KiB/s (7590kB/s)(7420KiB/1001msec) 00:10:02.156 slat (nsec): min=11985, max=54730, avg=16012.34, stdev=4779.27 00:10:02.156 clat (usec): min=167, max=2377, avg=263.48, stdev=62.02 00:10:02.156 lat (usec): min=192, max=2394, avg=279.50, stdev=62.58 00:10:02.156 clat percentiles (usec): 00:10:02.156 | 1.00th=[ 212], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 241], 00:10:02.156 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:10:02.156 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 314], 00:10:02.156 | 99.00th=[ 433], 99.50th=[ 469], 99.90th=[ 799], 99.95th=[ 2376], 00:10:02.156 | 99.99th=[ 2376] 00:10:02.156 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:02.156 slat (usec): min=16, max=122, avg=23.87, stdev= 7.06 00:10:02.156 clat (usec): min=108, max=804, avg=208.00, stdev=51.19 00:10:02.156 lat (usec): min=131, max=837, avg=231.86, stdev=52.76 00:10:02.156 clat percentiles (usec): 00:10:02.156 | 1.00th=[ 120], 5.00th=[ 135], 10.00th=[ 169], 20.00th=[ 184], 00:10:02.156 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:10:02.156 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 243], 95.00th=[ 310], 00:10:02.156 | 99.00th=[ 392], 99.50th=[ 420], 99.90th=[ 750], 99.95th=[ 783], 00:10:02.156 | 99.99th=[ 807] 00:10:02.156 bw ( KiB/s): min= 8192, max= 8192, per=19.70%, avg=8192.00, stdev= 0.00, samples=1 00:10:02.156 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:02.156 lat (usec) : 250=66.28%, 500=33.49%, 750=0.13%, 1000=0.08% 00:10:02.156 lat (msec) : 4=0.03% 00:10:02.156 cpu : usr=1.70%, sys=6.10%, ctx=3903, majf=0, minf=9 00:10:02.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:02.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.156 issued rwts: total=1855,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:02.156 00:10:02.156 Run status group 0 (all jobs): 00:10:02.156 READ: bw=37.1MiB/s (38.9MB/s), 7413KiB/s-12.0MiB/s (7590kB/s-12.6MB/s), io=37.1MiB (38.9MB), run=1001-1001msec 00:10:02.156 WRITE: bw=40.6MiB/s (42.6MB/s), 8184KiB/s-12.6MiB/s (8380kB/s-13.2MB/s), io=40.6MiB (42.6MB), run=1001-1001msec 00:10:02.156 00:10:02.156 Disk stats (read/write): 00:10:02.156 nvme0n1: ios=2610/2817, merge=0/0, ticks=453/367, in_queue=820, util=87.17% 00:10:02.156 nvme0n2: ios=1573/1881, merge=0/0, ticks=439/390, in_queue=829, util=87.78% 00:10:02.156 nvme0n3: ios=2321/2560, merge=0/0, ticks=417/363, in_queue=780, util=89.04% 00:10:02.156 nvme0n4: ios=1536/1796, merge=0/0, ticks=407/399, in_queue=806, util=89.60% 00:10:02.156 06:40:16 -- target/fio.sh@51 -- 
# /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:02.156 [global] 00:10:02.156 thread=1 00:10:02.156 invalidate=1 00:10:02.156 rw=randwrite 00:10:02.156 time_based=1 00:10:02.156 runtime=1 00:10:02.156 ioengine=libaio 00:10:02.156 direct=1 00:10:02.156 bs=4096 00:10:02.156 iodepth=1 00:10:02.156 norandommap=0 00:10:02.156 numjobs=1 00:10:02.156 00:10:02.156 verify_dump=1 00:10:02.156 verify_backlog=512 00:10:02.156 verify_state_save=0 00:10:02.156 do_verify=1 00:10:02.156 verify=crc32c-intel 00:10:02.156 [job0] 00:10:02.156 filename=/dev/nvme0n1 00:10:02.156 [job1] 00:10:02.156 filename=/dev/nvme0n2 00:10:02.156 [job2] 00:10:02.156 filename=/dev/nvme0n3 00:10:02.156 [job3] 00:10:02.156 filename=/dev/nvme0n4 00:10:02.156 Could not set queue depth (nvme0n1) 00:10:02.156 Could not set queue depth (nvme0n2) 00:10:02.156 Could not set queue depth (nvme0n3) 00:10:02.156 Could not set queue depth (nvme0n4) 00:10:02.416 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.416 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.416 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.416 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.416 fio-3.35 00:10:02.416 Starting 4 threads 00:10:03.795 00:10:03.795 job0: (groupid=0, jobs=1): err= 0: pid=63710: Sat Dec 14 06:40:17 2024 00:10:03.795 read: IOPS=2959, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec) 00:10:03.795 slat (nsec): min=10232, max=78904, avg=14001.22, stdev=5518.57 00:10:03.795 clat (usec): min=126, max=2625, avg=165.73, stdev=48.33 00:10:03.795 lat (usec): min=138, max=2639, avg=179.73, stdev=48.82 00:10:03.795 clat percentiles (usec): 00:10:03.795 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:10:03.795 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:10:03.795 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:10:03.795 | 99.00th=[ 210], 99.50th=[ 217], 99.90th=[ 330], 99.95th=[ 494], 00:10:03.795 | 99.99th=[ 2638] 00:10:03.795 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:03.795 slat (usec): min=12, max=109, avg=20.33, stdev= 7.34 00:10:03.795 clat (usec): min=93, max=617, avg=128.57, stdev=18.99 00:10:03.795 lat (usec): min=109, max=635, avg=148.90, stdev=20.41 00:10:03.795 clat percentiles (usec): 00:10:03.795 | 1.00th=[ 100], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 117], 00:10:03.795 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 130], 00:10:03.795 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 149], 95.00th=[ 157], 00:10:03.795 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 243], 99.95th=[ 562], 00:10:03.795 | 99.99th=[ 619] 00:10:03.795 bw ( KiB/s): min=12288, max=12288, per=26.17%, avg=12288.00, stdev= 0.00, samples=1 00:10:03.795 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:03.795 lat (usec) : 100=0.50%, 250=99.37%, 500=0.08%, 750=0.03% 00:10:03.795 lat (msec) : 4=0.02% 00:10:03.795 cpu : usr=2.40%, sys=7.90%, ctx=6034, majf=0, minf=11 00:10:03.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.795 
issued rwts: total=2962,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.795 job1: (groupid=0, jobs=1): err= 0: pid=63711: Sat Dec 14 06:40:17 2024 00:10:03.795 read: IOPS=2903, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1001msec) 00:10:03.795 slat (nsec): min=11118, max=60190, avg=13827.64, stdev=3971.34 00:10:03.795 clat (usec): min=124, max=290, avg=166.21, stdev=15.25 00:10:03.795 lat (usec): min=136, max=303, avg=180.03, stdev=15.70 00:10:03.795 clat percentiles (usec): 00:10:03.795 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:10:03.795 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:10:03.795 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 194], 00:10:03.795 | 99.00th=[ 208], 99.50th=[ 210], 99.90th=[ 227], 99.95th=[ 241], 00:10:03.795 | 99.99th=[ 293] 00:10:03.795 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:03.795 slat (nsec): min=12940, max=97403, avg=20953.77, stdev=6937.71 00:10:03.795 clat (usec): min=87, max=1334, avg=130.84, stdev=28.26 00:10:03.795 lat (usec): min=104, max=1363, avg=151.79, stdev=29.31 00:10:03.795 clat percentiles (usec): 00:10:03.795 | 1.00th=[ 101], 5.00th=[ 110], 10.00th=[ 115], 20.00th=[ 119], 00:10:03.795 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:10:03.795 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 151], 95.00th=[ 159], 00:10:03.795 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 277], 99.95th=[ 603], 00:10:03.795 | 99.99th=[ 1336] 00:10:03.795 bw ( KiB/s): min=12288, max=12288, per=26.17%, avg=12288.00, stdev= 0.00, samples=1 00:10:03.795 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:03.795 lat (usec) : 100=0.33%, 250=99.58%, 500=0.05%, 750=0.02% 00:10:03.795 lat (msec) : 2=0.02% 00:10:03.795 cpu : usr=1.50%, sys=9.00%, ctx=5978, majf=0, minf=13 00:10:03.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.795 issued rwts: total=2906,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.795 job2: (groupid=0, jobs=1): err= 0: pid=63712: Sat Dec 14 06:40:17 2024 00:10:03.795 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:03.795 slat (usec): min=9, max=107, avg=14.63, stdev= 4.81 00:10:03.795 clat (usec): min=123, max=4350, avg=192.39, stdev=115.58 00:10:03.795 lat (usec): min=148, max=4369, avg=207.02, stdev=115.96 00:10:03.795 clat percentiles (usec): 00:10:03.795 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:10:03.795 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 180], 00:10:03.795 | 70.00th=[ 188], 80.00th=[ 200], 90.00th=[ 262], 95.00th=[ 281], 00:10:03.795 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 1565], 99.95th=[ 3425], 00:10:03.795 | 99.99th=[ 4359] 00:10:03.795 write: IOPS=2742, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec); 0 zone resets 00:10:03.795 slat (usec): min=9, max=111, avg=22.24, stdev= 7.26 00:10:03.795 clat (usec): min=106, max=1056, avg=145.53, stdev=37.00 00:10:03.795 lat (usec): min=124, max=1077, avg=167.78, stdev=37.15 00:10:03.795 clat percentiles (usec): 00:10:03.795 | 1.00th=[ 114], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 128], 00:10:03.795 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 
00:10:03.795 | 70.00th=[ 147], 80.00th=[ 155], 90.00th=[ 172], 95.00th=[ 204], 00:10:03.795 | 99.00th=[ 258], 99.50th=[ 293], 99.90th=[ 603], 99.95th=[ 742], 00:10:03.795 | 99.99th=[ 1057] 00:10:03.795 bw ( KiB/s): min=12288, max=12288, per=26.17%, avg=12288.00, stdev= 0.00, samples=1 00:10:03.795 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:03.795 lat (usec) : 250=93.31%, 500=6.56%, 750=0.04%, 1000=0.02% 00:10:03.795 lat (msec) : 2=0.04%, 4=0.02%, 10=0.02% 00:10:03.795 cpu : usr=2.50%, sys=7.60%, ctx=5306, majf=0, minf=11 00:10:03.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.795 issued rwts: total=2560,2745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.795 job3: (groupid=0, jobs=1): err= 0: pid=63713: Sat Dec 14 06:40:17 2024 00:10:03.795 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:03.795 slat (nsec): min=7855, max=38602, avg=12357.71, stdev=3059.93 00:10:03.795 clat (usec): min=138, max=418, avg=186.03, stdev=38.32 00:10:03.795 lat (usec): min=152, max=428, avg=198.39, stdev=37.79 00:10:03.795 clat percentiles (usec): 00:10:03.795 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 163], 00:10:03.795 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:10:03.795 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 251], 95.00th=[ 281], 00:10:03.795 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 355], 99.95th=[ 355], 00:10:03.795 | 99.99th=[ 420] 00:10:03.795 write: IOPS=2860, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec); 0 zone resets 00:10:03.795 slat (usec): min=10, max=108, avg=19.14, stdev= 5.26 00:10:03.795 clat (usec): min=87, max=1719, avg=149.83, stdev=50.97 00:10:03.795 lat (usec): min=126, max=1737, avg=168.97, stdev=51.20 00:10:03.795 clat percentiles (usec): 00:10:03.795 | 1.00th=[ 117], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 130], 00:10:03.795 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:10:03.795 | 70.00th=[ 149], 80.00th=[ 159], 90.00th=[ 188], 95.00th=[ 223], 00:10:03.795 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 570], 99.95th=[ 1614], 00:10:03.795 | 99.99th=[ 1713] 00:10:03.795 bw ( KiB/s): min=12288, max=12288, per=26.17%, avg=12288.00, stdev= 0.00, samples=1 00:10:03.795 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:03.795 lat (usec) : 100=0.02%, 250=94.08%, 500=5.85%, 750=0.02% 00:10:03.795 lat (msec) : 2=0.04% 00:10:03.795 cpu : usr=1.70%, sys=7.20%, ctx=5424, majf=0, minf=13 00:10:03.795 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:03.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.795 issued rwts: total=2560,2863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:03.795 00:10:03.795 Run status group 0 (all jobs): 00:10:03.795 READ: bw=42.9MiB/s (45.0MB/s), 9.99MiB/s-11.6MiB/s (10.5MB/s-12.1MB/s), io=42.9MiB (45.0MB), run=1001-1001msec 00:10:03.795 WRITE: bw=45.9MiB/s (48.1MB/s), 10.7MiB/s-12.0MiB/s (11.2MB/s-12.6MB/s), io=45.9MiB (48.1MB), run=1001-1001msec 00:10:03.795 00:10:03.795 Disk stats (read/write): 00:10:03.795 nvme0n1: ios=2609/2702, merge=0/0, ticks=467/356, 
in_queue=823, util=88.06% 00:10:03.795 nvme0n2: ios=2592/2560, merge=0/0, ticks=463/356, in_queue=819, util=88.52% 00:10:03.795 nvme0n3: ios=2220/2560, merge=0/0, ticks=393/371, in_queue=764, util=88.62% 00:10:03.795 nvme0n4: ios=2335/2560, merge=0/0, ticks=416/384, in_queue=800, util=89.80% 00:10:03.795 06:40:17 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:03.795 [global] 00:10:03.795 thread=1 00:10:03.795 invalidate=1 00:10:03.795 rw=write 00:10:03.795 time_based=1 00:10:03.795 runtime=1 00:10:03.795 ioengine=libaio 00:10:03.795 direct=1 00:10:03.795 bs=4096 00:10:03.795 iodepth=128 00:10:03.795 norandommap=0 00:10:03.795 numjobs=1 00:10:03.795 00:10:03.795 verify_dump=1 00:10:03.795 verify_backlog=512 00:10:03.795 verify_state_save=0 00:10:03.795 do_verify=1 00:10:03.795 verify=crc32c-intel 00:10:03.795 [job0] 00:10:03.795 filename=/dev/nvme0n1 00:10:03.795 [job1] 00:10:03.795 filename=/dev/nvme0n2 00:10:03.795 [job2] 00:10:03.795 filename=/dev/nvme0n3 00:10:03.795 [job3] 00:10:03.795 filename=/dev/nvme0n4 00:10:03.795 Could not set queue depth (nvme0n1) 00:10:03.795 Could not set queue depth (nvme0n2) 00:10:03.795 Could not set queue depth (nvme0n3) 00:10:03.795 Could not set queue depth (nvme0n4) 00:10:03.795 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.796 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.796 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.796 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.796 fio-3.35 00:10:03.796 Starting 4 threads 00:10:04.732 00:10:04.732 job0: (groupid=0, jobs=1): err= 0: pid=63773: Sat Dec 14 06:40:18 2024 00:10:04.732 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:10:04.732 slat (usec): min=2, max=6633, avg=165.11, stdev=656.06 00:10:04.732 clat (usec): min=12697, max=28746, avg=20771.54, stdev=2777.28 00:10:04.732 lat (usec): min=12712, max=30727, avg=20936.66, stdev=2791.62 00:10:04.732 clat percentiles (usec): 00:10:04.732 | 1.00th=[14484], 5.00th=[16188], 10.00th=[17171], 20.00th=[18482], 00:10:04.732 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20579], 60.00th=[21365], 00:10:04.732 | 70.00th=[22152], 80.00th=[23200], 90.00th=[24511], 95.00th=[25560], 00:10:04.732 | 99.00th=[27395], 99.50th=[27919], 99.90th=[28443], 99.95th=[28705], 00:10:04.732 | 99.99th=[28705] 00:10:04.732 write: IOPS=3543, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1003msec); 0 zone resets 00:10:04.732 slat (usec): min=7, max=6614, avg=133.00, stdev=579.69 00:10:04.732 clat (usec): min=857, max=28452, avg=17692.04, stdev=3605.09 00:10:04.732 lat (usec): min=5306, max=28460, avg=17825.04, stdev=3609.44 00:10:04.732 clat percentiles (usec): 00:10:04.732 | 1.00th=[ 9372], 5.00th=[12911], 10.00th=[13829], 20.00th=[14484], 00:10:04.732 | 30.00th=[15401], 40.00th=[16450], 50.00th=[17171], 60.00th=[18220], 00:10:04.732 | 70.00th=[19530], 80.00th=[20579], 90.00th=[22676], 95.00th=[24511], 00:10:04.732 | 99.00th=[26084], 99.50th=[26346], 99.90th=[27132], 99.95th=[27395], 00:10:04.732 | 99.99th=[28443] 00:10:04.732 bw ( KiB/s): min=13365, max=14016, per=25.31%, avg=13690.50, stdev=460.33, samples=2 00:10:04.732 iops : min= 3341, max= 3504, avg=3422.50, stdev=115.26, samples=2 00:10:04.732 lat (usec) : 1000=0.02% 00:10:04.732 lat (msec) 
: 10=0.68%, 20=58.47%, 50=40.84% 00:10:04.732 cpu : usr=2.69%, sys=7.78%, ctx=800, majf=0, minf=13 00:10:04.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:04.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.733 issued rwts: total=3072,3554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.733 job1: (groupid=0, jobs=1): err= 0: pid=63774: Sat Dec 14 06:40:18 2024 00:10:04.733 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:10:04.733 slat (usec): min=6, max=4802, avg=146.44, stdev=723.69 00:10:04.733 clat (usec): min=14529, max=20828, avg=19379.81, stdev=911.41 00:10:04.733 lat (usec): min=18477, max=20844, avg=19526.25, stdev=556.79 00:10:04.733 clat percentiles (usec): 00:10:04.733 | 1.00th=[15139], 5.00th=[18482], 10.00th=[18744], 20.00th=[19006], 00:10:04.733 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19530], 00:10:04.733 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20317], 95.00th=[20579], 00:10:04.733 | 99.00th=[20841], 99.50th=[20841], 99.90th=[20841], 99.95th=[20841], 00:10:04.733 | 99.99th=[20841] 00:10:04.733 write: IOPS=3446, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1003msec); 0 zone resets 00:10:04.733 slat (usec): min=18, max=4818, avg=151.04, stdev=694.61 00:10:04.733 clat (usec): min=401, max=21093, avg=19313.15, stdev=1953.63 00:10:04.733 lat (usec): min=4559, max=21120, avg=19464.19, stdev=1826.42 00:10:04.733 clat percentiles (usec): 00:10:04.733 | 1.00th=[ 9372], 5.00th=[15926], 10.00th=[18744], 20.00th=[19268], 00:10:04.733 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:10:04.733 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20317], 95.00th=[20579], 00:10:04.733 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21103], 99.95th=[21103], 00:10:04.733 | 99.99th=[21103] 00:10:04.733 bw ( KiB/s): min=13037, max=13576, per=24.60%, avg=13306.50, stdev=381.13, samples=2 00:10:04.733 iops : min= 3259, max= 3394, avg=3326.50, stdev=95.46, samples=2 00:10:04.733 lat (usec) : 500=0.02% 00:10:04.733 lat (msec) : 10=0.86%, 20=75.19%, 50=23.94% 00:10:04.733 cpu : usr=2.99%, sys=10.88%, ctx=205, majf=0, minf=12 00:10:04.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:04.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.733 issued rwts: total=3072,3457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.733 job2: (groupid=0, jobs=1): err= 0: pid=63775: Sat Dec 14 06:40:18 2024 00:10:04.733 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:04.733 slat (usec): min=2, max=6998, avg=166.14, stdev=649.76 00:10:04.733 clat (usec): min=9787, max=29963, avg=21303.74, stdev=2710.37 00:10:04.733 lat (usec): min=9797, max=31131, avg=21469.88, stdev=2727.47 00:10:04.733 clat percentiles (usec): 00:10:04.733 | 1.00th=[15664], 5.00th=[17695], 10.00th=[18220], 20.00th=[19530], 00:10:04.733 | 30.00th=[19792], 40.00th=[20317], 50.00th=[20841], 60.00th=[21627], 00:10:04.733 | 70.00th=[22676], 80.00th=[23725], 90.00th=[24773], 95.00th=[25560], 00:10:04.733 | 99.00th=[27919], 99.50th=[28443], 99.90th=[29492], 99.95th=[29492], 00:10:04.733 | 99.99th=[30016] 00:10:04.733 write: IOPS=3093, BW=12.1MiB/s 
(12.7MB/s)(12.1MiB/1001msec); 0 zone resets 00:10:04.733 slat (usec): min=5, max=7999, avg=151.65, stdev=607.09 00:10:04.733 clat (usec): min=746, max=28936, avg=19523.23, stdev=3471.86 00:10:04.733 lat (usec): min=764, max=28954, avg=19674.88, stdev=3469.08 00:10:04.733 clat percentiles (usec): 00:10:04.733 | 1.00th=[11863], 5.00th=[14484], 10.00th=[15664], 20.00th=[16712], 00:10:04.733 | 30.00th=[17695], 40.00th=[18744], 50.00th=[19530], 60.00th=[20055], 00:10:04.733 | 70.00th=[21103], 80.00th=[22152], 90.00th=[23462], 95.00th=[24511], 00:10:04.733 | 99.00th=[28705], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:10:04.733 | 99.99th=[28967] 00:10:04.733 bw ( KiB/s): min=12288, max=12312, per=22.74%, avg=12300.00, stdev=16.97, samples=2 00:10:04.733 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:10:04.733 lat (usec) : 750=0.02%, 1000=0.13% 00:10:04.733 lat (msec) : 10=0.52%, 20=47.25%, 50=52.08% 00:10:04.733 cpu : usr=2.40%, sys=7.20%, ctx=885, majf=0, minf=15 00:10:04.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:04.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.733 issued rwts: total=3072,3097,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.733 job3: (groupid=0, jobs=1): err= 0: pid=63776: Sat Dec 14 06:40:18 2024 00:10:04.733 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:10:04.733 slat (usec): min=9, max=4987, avg=146.39, stdev=726.72 00:10:04.733 clat (usec): min=14434, max=20942, avg=19383.13, stdev=912.70 00:10:04.733 lat (usec): min=18515, max=20957, avg=19529.51, stdev=553.85 00:10:04.733 clat percentiles (usec): 00:10:04.733 | 1.00th=[15008], 5.00th=[18482], 10.00th=[18744], 20.00th=[19006], 00:10:04.733 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[19530], 00:10:04.733 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20317], 95.00th=[20579], 00:10:04.733 | 99.00th=[20841], 99.50th=[20841], 99.90th=[20841], 99.95th=[20841], 00:10:04.733 | 99.99th=[20841] 00:10:04.733 write: IOPS=3450, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1002msec); 0 zone resets 00:10:04.733 slat (usec): min=13, max=4732, avg=151.13, stdev=699.96 00:10:04.733 clat (usec): min=362, max=20983, avg=19304.34, stdev=1955.11 00:10:04.733 lat (usec): min=4610, max=21055, avg=19455.47, stdev=1826.34 00:10:04.733 clat percentiles (usec): 00:10:04.733 | 1.00th=[ 9372], 5.00th=[15926], 10.00th=[18744], 20.00th=[19268], 00:10:04.733 | 30.00th=[19530], 40.00th=[19530], 50.00th=[19530], 60.00th=[19792], 00:10:04.733 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20317], 95.00th=[20579], 00:10:04.733 | 99.00th=[20841], 99.50th=[20841], 99.90th=[20841], 99.95th=[21103], 00:10:04.733 | 99.99th=[21103] 00:10:04.733 bw ( KiB/s): min=13064, max=13576, per=24.62%, avg=13320.00, stdev=362.04, samples=2 00:10:04.733 iops : min= 3266, max= 3394, avg=3330.00, stdev=90.51, samples=2 00:10:04.733 lat (usec) : 500=0.02% 00:10:04.733 lat (msec) : 10=0.86%, 20=75.77%, 50=23.36% 00:10:04.733 cpu : usr=4.10%, sys=9.29%, ctx=205, majf=0, minf=7 00:10:04.733 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:04.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.733 issued rwts: total=3072,3457,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:04.733 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.733 00:10:04.733 Run status group 0 (all jobs): 00:10:04.733 READ: bw=47.9MiB/s (50.2MB/s), 12.0MiB/s-12.0MiB/s (12.5MB/s-12.6MB/s), io=48.0MiB (50.3MB), run=1001-1003msec 00:10:04.733 WRITE: bw=52.8MiB/s (55.4MB/s), 12.1MiB/s-13.8MiB/s (12.7MB/s-14.5MB/s), io=53.0MiB (55.6MB), run=1001-1003msec 00:10:04.733 00:10:04.733 Disk stats (read/write): 00:10:04.733 nvme0n1: ios=2679/3072, merge=0/0, ticks=17764/16244, in_queue=34008, util=88.48% 00:10:04.733 nvme0n2: ios=2609/3072, merge=0/0, ticks=11301/13709, in_queue=25010, util=89.09% 00:10:04.733 nvme0n3: ios=2587/2735, merge=0/0, ticks=17217/16493, in_queue=33710, util=89.19% 00:10:04.733 nvme0n4: ios=2560/3072, merge=0/0, ticks=11296/13799, in_queue=25095, util=89.65% 00:10:04.733 06:40:18 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:04.992 [global] 00:10:04.992 thread=1 00:10:04.992 invalidate=1 00:10:04.992 rw=randwrite 00:10:04.992 time_based=1 00:10:04.992 runtime=1 00:10:04.992 ioengine=libaio 00:10:04.992 direct=1 00:10:04.992 bs=4096 00:10:04.992 iodepth=128 00:10:04.992 norandommap=0 00:10:04.992 numjobs=1 00:10:04.992 00:10:04.992 verify_dump=1 00:10:04.992 verify_backlog=512 00:10:04.992 verify_state_save=0 00:10:04.992 do_verify=1 00:10:04.992 verify=crc32c-intel 00:10:04.992 [job0] 00:10:04.992 filename=/dev/nvme0n1 00:10:04.992 [job1] 00:10:04.992 filename=/dev/nvme0n2 00:10:04.992 [job2] 00:10:04.992 filename=/dev/nvme0n3 00:10:04.992 [job3] 00:10:04.992 filename=/dev/nvme0n4 00:10:04.992 Could not set queue depth (nvme0n1) 00:10:04.992 Could not set queue depth (nvme0n2) 00:10:04.992 Could not set queue depth (nvme0n3) 00:10:04.992 Could not set queue depth (nvme0n4) 00:10:04.992 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.992 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.992 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.992 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:04.992 fio-3.35 00:10:04.992 Starting 4 threads 00:10:06.370 00:10:06.370 job0: (groupid=0, jobs=1): err= 0: pid=63829: Sat Dec 14 06:40:20 2024 00:10:06.370 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:10:06.370 slat (usec): min=5, max=10373, avg=81.76, stdev=477.27 00:10:06.370 clat (usec): min=6477, max=21592, avg=11362.42, stdev=1615.12 00:10:06.370 lat (usec): min=6492, max=21929, avg=11444.17, stdev=1621.50 00:10:06.370 clat percentiles (usec): 00:10:06.370 | 1.00th=[ 7111], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[10683], 00:10:06.370 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:10:06.370 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[14222], 00:10:06.370 | 99.00th=[17957], 99.50th=[20055], 99.90th=[20579], 99.95th=[20579], 00:10:06.370 | 99.99th=[21627] 00:10:06.370 write: IOPS=5685, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1002msec); 0 zone resets 00:10:06.370 slat (usec): min=5, max=9343, avg=87.27, stdev=485.81 00:10:06.370 clat (usec): min=148, max=18374, avg=11051.55, stdev=1622.25 00:10:06.370 lat (usec): min=2035, max=18398, avg=11138.81, stdev=1578.05 00:10:06.370 clat percentiles (usec): 00:10:06.370 | 1.00th=[ 5604], 5.00th=[ 8979], 
10.00th=[10028], 20.00th=[10421], 00:10:06.370 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:10:06.370 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[13829], 00:10:06.370 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:10:06.370 | 99.99th=[18482] 00:10:06.370 bw ( KiB/s): min=20480, max=24576, per=34.01%, avg=22528.00, stdev=2896.31, samples=2 00:10:06.370 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:06.370 lat (usec) : 250=0.01% 00:10:06.370 lat (msec) : 4=0.28%, 10=6.57%, 20=92.87%, 50=0.27% 00:10:06.370 cpu : usr=5.39%, sys=13.99%, ctx=296, majf=0, minf=11 00:10:06.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:06.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.370 issued rwts: total=5632,5697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.370 job1: (groupid=0, jobs=1): err= 0: pid=63830: Sat Dec 14 06:40:20 2024 00:10:06.370 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:10:06.370 slat (usec): min=4, max=4715, avg=84.92, stdev=431.66 00:10:06.370 clat (usec): min=6722, max=16538, avg=11071.83, stdev=1134.79 00:10:06.370 lat (usec): min=6743, max=19466, avg=11156.75, stdev=1177.89 00:10:06.370 clat percentiles (usec): 00:10:06.370 | 1.00th=[ 7701], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10290], 00:10:06.370 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:10:06.370 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12387], 95.00th=[12911], 00:10:06.370 | 99.00th=[15008], 99.50th=[15270], 99.90th=[15926], 99.95th=[15926], 00:10:06.370 | 99.99th=[16581] 00:10:06.370 write: IOPS=5780, BW=22.6MiB/s (23.7MB/s)(22.6MiB/1002msec); 0 zone resets 00:10:06.370 slat (usec): min=10, max=4710, avg=82.48, stdev=439.42 00:10:06.370 clat (usec): min=286, max=16674, avg=11108.08, stdev=1331.95 00:10:06.370 lat (usec): min=4229, max=16727, avg=11190.57, stdev=1393.02 00:10:06.370 clat percentiles (usec): 00:10:06.370 | 1.00th=[ 5407], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10552], 00:10:06.370 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:10:06.370 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12387], 95.00th=[13042], 00:10:06.370 | 99.00th=[15139], 99.50th=[15664], 99.90th=[16188], 99.95th=[16712], 00:10:06.370 | 99.99th=[16712] 00:10:06.370 bw ( KiB/s): min=20968, max=24376, per=34.23%, avg=22672.00, stdev=2409.82, samples=2 00:10:06.370 iops : min= 5242, max= 6094, avg=5668.00, stdev=602.45, samples=2 00:10:06.370 lat (usec) : 500=0.01% 00:10:06.370 lat (msec) : 10=10.67%, 20=89.32% 00:10:06.370 cpu : usr=5.79%, sys=14.09%, ctx=428, majf=0, minf=13 00:10:06.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:06.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.370 issued rwts: total=5632,5792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.370 job2: (groupid=0, jobs=1): err= 0: pid=63831: Sat Dec 14 06:40:20 2024 00:10:06.370 read: IOPS=2485, BW=9940KiB/s (10.2MB/s)(9960KiB/1002msec) 00:10:06.370 slat (usec): min=4, max=7994, avg=200.46, stdev=796.93 00:10:06.370 clat (usec): min=566, max=37144, 
avg=24605.29, stdev=4289.29 00:10:06.370 lat (usec): min=1656, max=37196, avg=24805.75, stdev=4315.04 00:10:06.370 clat percentiles (usec): 00:10:06.370 | 1.00th=[ 6325], 5.00th=[19006], 10.00th=[20841], 20.00th=[22676], 00:10:06.370 | 30.00th=[23725], 40.00th=[24511], 50.00th=[25035], 60.00th=[25035], 00:10:06.370 | 70.00th=[25822], 80.00th=[26870], 90.00th=[29754], 95.00th=[31327], 00:10:06.370 | 99.00th=[33817], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:10:06.370 | 99.99th=[36963] 00:10:06.371 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:10:06.371 slat (usec): min=5, max=6609, avg=188.49, stdev=838.04 00:10:06.371 clat (usec): min=14237, max=40597, avg=25198.42, stdev=5041.29 00:10:06.371 lat (usec): min=14264, max=40615, avg=25386.91, stdev=5057.68 00:10:06.371 clat percentiles (usec): 00:10:06.371 | 1.00th=[16581], 5.00th=[18482], 10.00th=[18744], 20.00th=[19792], 00:10:06.371 | 30.00th=[22676], 40.00th=[23987], 50.00th=[25035], 60.00th=[26346], 00:10:06.371 | 70.00th=[26870], 80.00th=[28705], 90.00th=[31589], 95.00th=[35914], 00:10:06.371 | 99.00th=[38011], 99.50th=[38011], 99.90th=[40633], 99.95th=[40633], 00:10:06.371 | 99.99th=[40633] 00:10:06.371 bw ( KiB/s): min= 8536, max=11967, per=15.48%, avg=10251.50, stdev=2426.08, samples=2 00:10:06.371 iops : min= 2134, max= 2991, avg=2562.50, stdev=605.99, samples=2 00:10:06.371 lat (usec) : 750=0.02% 00:10:06.371 lat (msec) : 2=0.20%, 4=0.16%, 10=0.48%, 20=13.17%, 50=85.98% 00:10:06.371 cpu : usr=2.20%, sys=6.29%, ctx=561, majf=0, minf=9 00:10:06.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:06.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.371 issued rwts: total=2490,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.371 job3: (groupid=0, jobs=1): err= 0: pid=63832: Sat Dec 14 06:40:20 2024 00:10:06.371 read: IOPS=2508, BW=9.80MiB/s (10.3MB/s)(9.83MiB/1003msec) 00:10:06.371 slat (usec): min=5, max=8448, avg=198.94, stdev=797.28 00:10:06.371 clat (usec): min=1345, max=35823, avg=25058.13, stdev=3611.33 00:10:06.371 lat (usec): min=2787, max=36206, avg=25257.07, stdev=3614.44 00:10:06.371 clat percentiles (usec): 00:10:06.371 | 1.00th=[ 9634], 5.00th=[20055], 10.00th=[22152], 20.00th=[23725], 00:10:06.371 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25297], 60.00th=[25560], 00:10:06.371 | 70.00th=[26084], 80.00th=[26870], 90.00th=[28443], 95.00th=[30802], 00:10:06.371 | 99.00th=[33162], 99.50th=[33424], 99.90th=[35914], 99.95th=[35914], 00:10:06.371 | 99.99th=[35914] 00:10:06.371 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:10:06.371 slat (usec): min=6, max=7133, avg=188.27, stdev=830.22 00:10:06.371 clat (usec): min=13588, max=38206, avg=24702.48, stdev=5031.87 00:10:06.371 lat (usec): min=13601, max=38227, avg=24890.75, stdev=5046.82 00:10:06.371 clat percentiles (usec): 00:10:06.371 | 1.00th=[16057], 5.00th=[17957], 10.00th=[18482], 20.00th=[19268], 00:10:06.371 | 30.00th=[22414], 40.00th=[23462], 50.00th=[24511], 60.00th=[25822], 00:10:06.371 | 70.00th=[26608], 80.00th=[27919], 90.00th=[31589], 95.00th=[34866], 00:10:06.371 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:10:06.371 | 99.99th=[38011] 00:10:06.371 bw ( KiB/s): min= 9744, max=10736, per=15.46%, avg=10240.00, stdev=701.45, samples=2 
00:10:06.371 iops : min= 2436, max= 2684, avg=2560.00, stdev=175.36, samples=2 00:10:06.371 lat (msec) : 2=0.02%, 4=0.08%, 10=0.75%, 20=13.81%, 50=85.34% 00:10:06.371 cpu : usr=2.00%, sys=6.59%, ctx=558, majf=0, minf=19 00:10:06.371 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:06.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.371 issued rwts: total=2516,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.371 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.371 00:10:06.371 Run status group 0 (all jobs): 00:10:06.371 READ: bw=63.4MiB/s (66.4MB/s), 9940KiB/s-22.0MiB/s (10.2MB/s-23.0MB/s), io=63.6MiB (66.6MB), run=1002-1003msec 00:10:06.371 WRITE: bw=64.7MiB/s (67.8MB/s), 9.97MiB/s-22.6MiB/s (10.5MB/s-23.7MB/s), io=64.9MiB (68.0MB), run=1002-1003msec 00:10:06.371 00:10:06.371 Disk stats (read/write): 00:10:06.371 nvme0n1: ios=4810/5120, merge=0/0, ticks=40066/39855, in_queue=79921, util=88.28% 00:10:06.371 nvme0n2: ios=4782/5120, merge=0/0, ticks=24387/24110, in_queue=48497, util=89.18% 00:10:06.371 nvme0n3: ios=2054/2313, merge=0/0, ticks=16783/16871, in_queue=33654, util=88.58% 00:10:06.371 nvme0n4: ios=2048/2358, merge=0/0, ticks=16924/16910, in_queue=33834, util=89.54% 00:10:06.371 06:40:20 -- target/fio.sh@55 -- # sync 00:10:06.371 06:40:20 -- target/fio.sh@59 -- # fio_pid=63851 00:10:06.371 06:40:20 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:06.371 06:40:20 -- target/fio.sh@61 -- # sleep 3 00:10:06.371 [global] 00:10:06.371 thread=1 00:10:06.371 invalidate=1 00:10:06.371 rw=read 00:10:06.371 time_based=1 00:10:06.371 runtime=10 00:10:06.371 ioengine=libaio 00:10:06.371 direct=1 00:10:06.371 bs=4096 00:10:06.371 iodepth=1 00:10:06.371 norandommap=1 00:10:06.371 numjobs=1 00:10:06.371 00:10:06.371 [job0] 00:10:06.371 filename=/dev/nvme0n1 00:10:06.371 [job1] 00:10:06.371 filename=/dev/nvme0n2 00:10:06.371 [job2] 00:10:06.371 filename=/dev/nvme0n3 00:10:06.371 [job3] 00:10:06.371 filename=/dev/nvme0n4 00:10:06.371 Could not set queue depth (nvme0n1) 00:10:06.371 Could not set queue depth (nvme0n2) 00:10:06.371 Could not set queue depth (nvme0n3) 00:10:06.371 Could not set queue depth (nvme0n4) 00:10:06.371 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.371 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.371 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.371 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.371 fio-3.35 00:10:06.371 Starting 4 threads 00:10:09.664 06:40:23 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:09.664 fio: pid=63894, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:09.664 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46821376, buflen=4096 00:10:09.664 06:40:23 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:09.923 fio: pid=63893, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:09.923 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=48852992, buflen=4096 
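The err=95 / "Operation not supported" records here are the expected outcome of the hotplug phase: target/fio.sh starts a 10-second read job against all four namespaces (fio pid 63851, "-t read -r 10" above) and then deletes the backing bdevs out from under it, so in-flight reads fail and the script later reports "nvmf hotplug test: fio failed as expected". A rough reconstruction of the delete sequence from the trace (the loop form is an assumption; the bdev names are the ones created earlier in this test, where Malloc0/Malloc1 back nvme0n1/n2 directly and raid0/concat0 back nvme0n3/n4):
  ./scripts/rpc.py bdev_raid_delete concat0      # fails reads on nvme0n4
  ./scripts/rpc.py bdev_raid_delete raid0        # fails reads on nvme0n3
  for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      ./scripts/rpc.py bdev_malloc_delete "$bdev"
  done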
00:10:09.923 06:40:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:09.923 06:40:23 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:09.923 fio: pid=63891, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:09.923 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=14905344, buflen=4096 00:10:10.182 06:40:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.182 06:40:23 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:10.182 fio: pid=63892, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:10.182 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=16044032, buflen=4096 00:10:10.442 06:40:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.442 06:40:24 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:10.442 00:10:10.442 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63891: Sat Dec 14 06:40:24 2024 00:10:10.442 read: IOPS=5797, BW=22.6MiB/s (23.7MB/s)(78.2MiB/3454msec) 00:10:10.442 slat (usec): min=9, max=11611, avg=14.12, stdev=140.63 00:10:10.442 clat (usec): min=119, max=2351, avg=157.29, stdev=35.95 00:10:10.442 lat (usec): min=131, max=11794, avg=171.40, stdev=145.74 00:10:10.442 clat percentiles (usec): 00:10:10.442 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:10:10.442 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 159], 00:10:10.442 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 190], 00:10:10.442 | 99.00th=[ 208], 99.50th=[ 227], 99.90th=[ 379], 99.95th=[ 553], 00:10:10.442 | 99.99th=[ 1991] 00:10:10.442 bw ( KiB/s): min=22379, max=23776, per=33.83%, avg=23261.83, stdev=561.08, samples=6 00:10:10.442 iops : min= 5594, max= 5944, avg=5815.33, stdev=140.51, samples=6 00:10:10.442 lat (usec) : 250=99.64%, 500=0.29%, 750=0.03%, 1000=0.01% 00:10:10.442 lat (msec) : 2=0.01%, 4=0.01% 00:10:10.442 cpu : usr=1.80%, sys=6.17%, ctx=20035, majf=0, minf=1 00:10:10.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.442 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.442 issued rwts: total=20024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.442 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63892: Sat Dec 14 06:40:24 2024 00:10:10.442 read: IOPS=5479, BW=21.4MiB/s (22.4MB/s)(79.3MiB/3705msec) 00:10:10.442 slat (usec): min=8, max=15818, avg=15.94, stdev=195.36 00:10:10.442 clat (usec): min=4, max=2448, avg=165.34, stdev=44.48 00:10:10.442 lat (usec): min=123, max=15986, avg=181.28, stdev=200.80 00:10:10.442 clat percentiles (usec): 00:10:10.442 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 141], 00:10:10.442 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 163], 00:10:10.442 | 70.00th=[ 169], 80.00th=[ 182], 90.00th=[ 215], 95.00th=[ 243], 00:10:10.442 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 392], 99.95th=[ 523], 00:10:10.442 | 99.99th=[ 1778] 00:10:10.442 bw ( 
KiB/s): min=17333, max=23312, per=31.84%, avg=21892.57, stdev=2216.87, samples=7 00:10:10.442 iops : min= 4333, max= 5828, avg=5473.00, stdev=554.37, samples=7 00:10:10.442 lat (usec) : 10=0.01%, 100=0.01%, 250=96.34%, 500=3.58%, 750=0.02% 00:10:10.442 lat (usec) : 1000=0.01% 00:10:10.442 lat (msec) : 2=0.02%, 4=0.01% 00:10:10.442 cpu : usr=1.59%, sys=6.13%, ctx=20323, majf=0, minf=2 00:10:10.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.442 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.442 issued rwts: total=20302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.442 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63893: Sat Dec 14 06:40:24 2024 00:10:10.442 read: IOPS=3702, BW=14.5MiB/s (15.2MB/s)(46.6MiB/3222msec) 00:10:10.442 slat (usec): min=7, max=9383, avg=13.11, stdev=107.72 00:10:10.442 clat (usec): min=130, max=8081, avg=255.91, stdev=86.46 00:10:10.442 lat (usec): min=141, max=9587, avg=269.02, stdev=139.23 00:10:10.442 clat percentiles (usec): 00:10:10.442 | 1.00th=[ 151], 5.00th=[ 215], 10.00th=[ 227], 20.00th=[ 237], 00:10:10.442 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:10:10.442 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 297], 00:10:10.442 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 545], 99.95th=[ 1037], 00:10:10.442 | 99.99th=[ 3228] 00:10:10.442 bw ( KiB/s): min=14499, max=15464, per=21.50%, avg=14783.17, stdev=345.58, samples=6 00:10:10.442 iops : min= 3624, max= 3866, avg=3695.67, stdev=86.52, samples=6 00:10:10.442 lat (usec) : 250=42.86%, 500=57.02%, 750=0.06%, 1000=0.01% 00:10:10.442 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01% 00:10:10.442 cpu : usr=0.93%, sys=4.19%, ctx=11937, majf=0, minf=2 00:10:10.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.442 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.442 issued rwts: total=11928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.442 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=63894: Sat Dec 14 06:40:24 2024 00:10:10.442 read: IOPS=3877, BW=15.1MiB/s (15.9MB/s)(44.7MiB/2948msec) 00:10:10.442 slat (nsec): min=7440, max=96320, avg=12238.31, stdev=4088.42 00:10:10.442 clat (usec): min=137, max=569, avg=244.28, stdev=39.12 00:10:10.442 lat (usec): min=148, max=581, avg=256.52, stdev=39.01 00:10:10.442 clat percentiles (usec): 00:10:10.442 | 1.00th=[ 149], 5.00th=[ 161], 10.00th=[ 174], 20.00th=[ 227], 00:10:10.442 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:10:10.442 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:10:10.442 | 99.00th=[ 318], 99.50th=[ 322], 99.90th=[ 338], 99.95th=[ 351], 00:10:10.442 | 99.99th=[ 433] 00:10:10.442 bw ( KiB/s): min=14632, max=19624, per=22.79%, avg=15672.00, stdev=2210.14, samples=5 00:10:10.442 iops : min= 3658, max= 4906, avg=3918.00, stdev=552.53, samples=5 00:10:10.442 lat (usec) : 250=47.98%, 500=52.00%, 750=0.01% 00:10:10.442 cpu : usr=1.22%, sys=4.51%, ctx=11433, majf=0, minf=1 00:10:10.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.442 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.442 issued rwts: total=11432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.442 00:10:10.442 Run status group 0 (all jobs): 00:10:10.442 READ: bw=67.1MiB/s (70.4MB/s), 14.5MiB/s-22.6MiB/s (15.2MB/s-23.7MB/s), io=249MiB (261MB), run=2948-3705msec 00:10:10.442 00:10:10.442 Disk stats (read/write): 00:10:10.442 nvme0n1: ios=19486/0, merge=0/0, ticks=3095/0, in_queue=3095, util=95.34% 00:10:10.442 nvme0n2: ios=19742/0, merge=0/0, ticks=3290/0, in_queue=3290, util=95.16% 00:10:10.442 nvme0n3: ios=11500/0, merge=0/0, ticks=2843/0, in_queue=2843, util=96.21% 00:10:10.442 nvme0n4: ios=11144/0, merge=0/0, ticks=2637/0, in_queue=2637, util=96.79% 00:10:10.442 06:40:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.442 06:40:24 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:10.702 06:40:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.702 06:40:24 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:10.961 06:40:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:10.961 06:40:24 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:11.220 06:40:25 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:11.220 06:40:25 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:11.479 06:40:25 -- target/fio.sh@69 -- # fio_status=0 00:10:11.479 06:40:25 -- target/fio.sh@70 -- # wait 63851 00:10:11.479 06:40:25 -- target/fio.sh@70 -- # fio_status=4 00:10:11.479 06:40:25 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:11.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.479 06:40:25 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:11.479 06:40:25 -- common/autotest_common.sh@1208 -- # local i=0 00:10:11.479 06:40:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.479 06:40:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:10:11.479 06:40:25 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:10:11.479 06:40:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:11.479 nvmf hotplug test: fio failed as expected 00:10:11.479 06:40:25 -- common/autotest_common.sh@1220 -- # return 0 00:10:11.479 06:40:25 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:11.479 06:40:25 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:11.479 06:40:25 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.737 06:40:25 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:11.737 06:40:25 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:11.737 06:40:25 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:11.737 06:40:25 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:11.737 06:40:25 -- target/fio.sh@91 -- # nvmftestfini 00:10:11.737 
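At this point the data-path portion of the test is done: the initiator has disconnected (nvme disconnect -n nqn.2016-06.io.spdk:cnode1) and the subsystem has been deleted, and the nvmftestfini call just above expands below into nvmfcleanup plus module and namespace teardown. A rough manual equivalent, with names and the PID taken from the trace (remove_spdk_ns is assumed to remove the nvmf_tgt_ns_spdk namespace), would be:
  modprobe -v -r nvme-tcp          # unloads nvme_tcp/nvme_fabrics/nvme_keyring per the rmmod lines below
  modprobe -v -r nvme-fabrics
  kill 63464                       # killprocess of the nvmf_tgt started earlier
  ip netns del nvmf_tgt_ns_spdk    # assumed equivalent of remove_spdk_ns in the trace
  ip -4 addr flush nvmf_init_if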
06:40:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:11.737 06:40:25 -- nvmf/common.sh@116 -- # sync 00:10:11.737 06:40:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:11.737 06:40:25 -- nvmf/common.sh@119 -- # set +e 00:10:11.737 06:40:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:11.737 06:40:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:11.737 rmmod nvme_tcp 00:10:11.997 rmmod nvme_fabrics 00:10:11.997 rmmod nvme_keyring 00:10:11.997 06:40:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:11.997 06:40:25 -- nvmf/common.sh@123 -- # set -e 00:10:11.997 06:40:25 -- nvmf/common.sh@124 -- # return 0 00:10:11.997 06:40:25 -- nvmf/common.sh@477 -- # '[' -n 63464 ']' 00:10:11.997 06:40:25 -- nvmf/common.sh@478 -- # killprocess 63464 00:10:11.997 06:40:25 -- common/autotest_common.sh@936 -- # '[' -z 63464 ']' 00:10:11.997 06:40:25 -- common/autotest_common.sh@940 -- # kill -0 63464 00:10:11.997 06:40:25 -- common/autotest_common.sh@941 -- # uname 00:10:11.997 06:40:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:11.997 06:40:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63464 00:10:11.997 killing process with pid 63464 00:10:11.997 06:40:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:11.997 06:40:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:11.997 06:40:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63464' 00:10:11.997 06:40:25 -- common/autotest_common.sh@955 -- # kill 63464 00:10:11.997 06:40:25 -- common/autotest_common.sh@960 -- # wait 63464 00:10:11.997 06:40:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:11.997 06:40:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:11.997 06:40:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:11.997 06:40:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.997 06:40:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:11.997 06:40:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.997 06:40:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.997 06:40:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.257 06:40:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:12.257 ************************************ 00:10:12.257 END TEST nvmf_fio_target 00:10:12.257 ************************************ 00:10:12.257 00:10:12.257 real 0m19.135s 00:10:12.257 user 1m11.452s 00:10:12.257 sys 0m10.644s 00:10:12.257 06:40:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:12.257 06:40:26 -- common/autotest_common.sh@10 -- # set +x 00:10:12.257 06:40:26 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:12.257 06:40:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:12.257 06:40:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:12.257 06:40:26 -- common/autotest_common.sh@10 -- # set +x 00:10:12.257 ************************************ 00:10:12.257 START TEST nvmf_bdevio 00:10:12.257 ************************************ 00:10:12.257 06:40:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:12.257 * Looking for test storage... 
00:10:12.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.257 06:40:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:12.257 06:40:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:12.257 06:40:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:12.257 06:40:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:12.257 06:40:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:12.257 06:40:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:12.257 06:40:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:12.257 06:40:26 -- scripts/common.sh@335 -- # IFS=.-: 00:10:12.257 06:40:26 -- scripts/common.sh@335 -- # read -ra ver1 00:10:12.257 06:40:26 -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.257 06:40:26 -- scripts/common.sh@336 -- # read -ra ver2 00:10:12.257 06:40:26 -- scripts/common.sh@337 -- # local 'op=<' 00:10:12.257 06:40:26 -- scripts/common.sh@339 -- # ver1_l=2 00:10:12.257 06:40:26 -- scripts/common.sh@340 -- # ver2_l=1 00:10:12.257 06:40:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:12.257 06:40:26 -- scripts/common.sh@343 -- # case "$op" in 00:10:12.257 06:40:26 -- scripts/common.sh@344 -- # : 1 00:10:12.257 06:40:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:12.257 06:40:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.257 06:40:26 -- scripts/common.sh@364 -- # decimal 1 00:10:12.257 06:40:26 -- scripts/common.sh@352 -- # local d=1 00:10:12.257 06:40:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.257 06:40:26 -- scripts/common.sh@354 -- # echo 1 00:10:12.257 06:40:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:12.257 06:40:26 -- scripts/common.sh@365 -- # decimal 2 00:10:12.257 06:40:26 -- scripts/common.sh@352 -- # local d=2 00:10:12.257 06:40:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.257 06:40:26 -- scripts/common.sh@354 -- # echo 2 00:10:12.257 06:40:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:12.257 06:40:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:12.257 06:40:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:12.257 06:40:26 -- scripts/common.sh@367 -- # return 0 00:10:12.257 06:40:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.257 06:40:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:12.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.257 --rc genhtml_branch_coverage=1 00:10:12.257 --rc genhtml_function_coverage=1 00:10:12.257 --rc genhtml_legend=1 00:10:12.257 --rc geninfo_all_blocks=1 00:10:12.257 --rc geninfo_unexecuted_blocks=1 00:10:12.257 00:10:12.257 ' 00:10:12.257 06:40:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:12.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.257 --rc genhtml_branch_coverage=1 00:10:12.257 --rc genhtml_function_coverage=1 00:10:12.257 --rc genhtml_legend=1 00:10:12.257 --rc geninfo_all_blocks=1 00:10:12.257 --rc geninfo_unexecuted_blocks=1 00:10:12.257 00:10:12.257 ' 00:10:12.257 06:40:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:12.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.257 --rc genhtml_branch_coverage=1 00:10:12.257 --rc genhtml_function_coverage=1 00:10:12.257 --rc genhtml_legend=1 00:10:12.257 --rc geninfo_all_blocks=1 00:10:12.257 --rc geninfo_unexecuted_blocks=1 00:10:12.257 00:10:12.257 ' 00:10:12.257 
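[Editorial note] The scripts/common.sh block above is autotest_common.sh probing the installed lcov version (here 1.15) and enabling branch/function coverage flags only when it is older than 2. The traced cmp_versions helper splits the dotted versions into arrays and compares field by field; a minimal sketch of the same idea, not the real helper, would be:

    # Illustrative version gate, equivalent in effect to the traced cmp_versions check
    version_lt() {   # true if $1 sorts strictly before $2 as a version string
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]
    }
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi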
06:40:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:12.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.257 --rc genhtml_branch_coverage=1 00:10:12.257 --rc genhtml_function_coverage=1 00:10:12.257 --rc genhtml_legend=1 00:10:12.257 --rc geninfo_all_blocks=1 00:10:12.257 --rc geninfo_unexecuted_blocks=1 00:10:12.257 00:10:12.257 ' 00:10:12.257 06:40:26 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.257 06:40:26 -- nvmf/common.sh@7 -- # uname -s 00:10:12.257 06:40:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.257 06:40:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.257 06:40:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.257 06:40:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.257 06:40:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.257 06:40:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.257 06:40:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.257 06:40:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.257 06:40:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.257 06:40:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.517 06:40:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:10:12.517 06:40:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:10:12.517 06:40:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.517 06:40:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.517 06:40:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:12.517 06:40:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.517 06:40:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.517 06:40:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.517 06:40:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.517 06:40:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.517 06:40:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.517 06:40:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.517 06:40:26 -- paths/export.sh@5 -- # export PATH 00:10:12.517 06:40:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.517 06:40:26 -- nvmf/common.sh@46 -- # : 0 00:10:12.517 06:40:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:12.517 06:40:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:12.517 06:40:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:12.517 06:40:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.517 06:40:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.517 06:40:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:12.517 06:40:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:12.517 06:40:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:12.517 06:40:26 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.517 06:40:26 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.517 06:40:26 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:12.517 06:40:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:12.517 06:40:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.517 06:40:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:12.517 06:40:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:12.517 06:40:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:12.517 06:40:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.517 06:40:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.517 06:40:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.517 06:40:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:12.517 06:40:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:12.517 06:40:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:12.517 06:40:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:12.517 06:40:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:12.517 06:40:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:12.517 06:40:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.517 06:40:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.517 06:40:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:12.517 06:40:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:12.517 06:40:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:12.517 06:40:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:12.517 06:40:26 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:12.517 06:40:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.517 06:40:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:12.517 06:40:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:12.517 06:40:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:12.517 06:40:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:12.517 06:40:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:12.517 06:40:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:12.517 Cannot find device "nvmf_tgt_br" 00:10:12.517 06:40:26 -- nvmf/common.sh@154 -- # true 00:10:12.517 06:40:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.517 Cannot find device "nvmf_tgt_br2" 00:10:12.517 06:40:26 -- nvmf/common.sh@155 -- # true 00:10:12.517 06:40:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:12.517 06:40:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:12.517 Cannot find device "nvmf_tgt_br" 00:10:12.517 06:40:26 -- nvmf/common.sh@157 -- # true 00:10:12.517 06:40:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:12.517 Cannot find device "nvmf_tgt_br2" 00:10:12.517 06:40:26 -- nvmf/common.sh@158 -- # true 00:10:12.517 06:40:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:12.517 06:40:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:12.517 06:40:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.517 06:40:26 -- nvmf/common.sh@161 -- # true 00:10:12.517 06:40:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.518 06:40:26 -- nvmf/common.sh@162 -- # true 00:10:12.518 06:40:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.518 06:40:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.518 06:40:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.518 06:40:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.518 06:40:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.518 06:40:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.518 06:40:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.518 06:40:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:12.518 06:40:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:12.518 06:40:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:12.518 06:40:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:12.518 06:40:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:12.518 06:40:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:12.518 06:40:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.518 06:40:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.518 06:40:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:12.518 06:40:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:12.777 06:40:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:12.777 06:40:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.777 06:40:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.777 06:40:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.777 06:40:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.777 06:40:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.777 06:40:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:12.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:10:12.777 00:10:12.777 --- 10.0.0.2 ping statistics --- 00:10:12.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.777 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:12.777 06:40:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:12.777 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.777 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:10:12.777 00:10:12.777 --- 10.0.0.3 ping statistics --- 00:10:12.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.777 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:12.777 06:40:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:12.777 00:10:12.777 --- 10.0.0.1 ping statistics --- 00:10:12.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.777 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:12.777 06:40:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.777 06:40:26 -- nvmf/common.sh@421 -- # return 0 00:10:12.777 06:40:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:12.777 06:40:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.777 06:40:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:12.777 06:40:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:12.777 06:40:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.777 06:40:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:12.777 06:40:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:12.777 06:40:26 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:12.777 06:40:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:12.777 06:40:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:12.777 06:40:26 -- common/autotest_common.sh@10 -- # set +x 00:10:12.777 06:40:26 -- nvmf/common.sh@469 -- # nvmfpid=64165 00:10:12.777 06:40:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:12.777 06:40:26 -- nvmf/common.sh@470 -- # waitforlisten 64165 00:10:12.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
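[Editorial note] The nvmf_veth_init block above builds the virtual test topology: a network namespace nvmf_tgt_ns_spdk for the target, veth pairs for the initiator (nvmf_init_if/nvmf_init_br) and the target (nvmf_tgt_if/nvmf_tgt_br, plus a second pair for 10.0.0.3), all joined by the nvmf_br bridge, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, then verified by the three pings. Condensed to a single target interface, the traced ip/iptables commands amount to roughly:

    # Condensed from the trace above; the second target interface (10.0.0.3) is omitted for brevity
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host side reaching the target namespace over the bridge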
00:10:12.777 06:40:26 -- common/autotest_common.sh@829 -- # '[' -z 64165 ']' 00:10:12.777 06:40:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.777 06:40:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.777 06:40:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.777 06:40:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.777 06:40:26 -- common/autotest_common.sh@10 -- # set +x 00:10:12.777 [2024-12-14 06:40:26.665619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:12.777 [2024-12-14 06:40:26.665722] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.036 [2024-12-14 06:40:26.804251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.036 [2024-12-14 06:40:26.856381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:13.036 [2024-12-14 06:40:26.856775] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.036 [2024-12-14 06:40:26.856904] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.036 [2024-12-14 06:40:26.857024] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.036 [2024-12-14 06:40:26.857374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:13.036 [2024-12-14 06:40:26.857509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:13.036 [2024-12-14 06:40:26.857634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:13.036 [2024-12-14 06:40:26.857636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.988 06:40:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.988 06:40:27 -- common/autotest_common.sh@862 -- # return 0 00:10:13.988 06:40:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:13.988 06:40:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.988 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:10:13.988 06:40:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.988 06:40:27 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:13.988 06:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.988 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:10:13.988 [2024-12-14 06:40:27.735563] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.988 06:40:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.988 06:40:27 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:13.988 06:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.988 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:10:13.988 Malloc0 00:10:13.988 06:40:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.988 06:40:27 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:13.988 06:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.988 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:10:13.988 06:40:27 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.988 06:40:27 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:13.988 06:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.988 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:10:13.988 06:40:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.988 06:40:27 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.988 06:40:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.988 06:40:27 -- common/autotest_common.sh@10 -- # set +x 00:10:13.988 [2024-12-14 06:40:27.794691] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.988 06:40:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.988 06:40:27 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:13.988 06:40:27 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:13.988 06:40:27 -- nvmf/common.sh@520 -- # config=() 00:10:13.988 06:40:27 -- nvmf/common.sh@520 -- # local subsystem config 00:10:13.988 06:40:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:13.988 06:40:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:13.988 { 00:10:13.988 "params": { 00:10:13.988 "name": "Nvme$subsystem", 00:10:13.988 "trtype": "$TEST_TRANSPORT", 00:10:13.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:13.988 "adrfam": "ipv4", 00:10:13.988 "trsvcid": "$NVMF_PORT", 00:10:13.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:13.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:13.988 "hdgst": ${hdgst:-false}, 00:10:13.988 "ddgst": ${ddgst:-false} 00:10:13.988 }, 00:10:13.988 "method": "bdev_nvme_attach_controller" 00:10:13.988 } 00:10:13.988 EOF 00:10:13.988 )") 00:10:13.988 06:40:27 -- nvmf/common.sh@542 -- # cat 00:10:13.988 06:40:27 -- nvmf/common.sh@544 -- # jq . 00:10:13.988 06:40:27 -- nvmf/common.sh@545 -- # IFS=, 00:10:13.988 06:40:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:13.988 "params": { 00:10:13.988 "name": "Nvme1", 00:10:13.988 "trtype": "tcp", 00:10:13.988 "traddr": "10.0.0.2", 00:10:13.988 "adrfam": "ipv4", 00:10:13.988 "trsvcid": "4420", 00:10:13.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:13.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:13.988 "hdgst": false, 00:10:13.988 "ddgst": false 00:10:13.988 }, 00:10:13.988 "method": "bdev_nvme_attach_controller" 00:10:13.988 }' 00:10:13.988 [2024-12-14 06:40:27.852918] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:13.988 [2024-12-14 06:40:27.853002] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64201 ] 00:10:14.256 [2024-12-14 06:40:27.990740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:14.256 [2024-12-14 06:40:28.062778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.256 [2024-12-14 06:40:28.062926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.256 [2024-12-14 06:40:28.062929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.256 [2024-12-14 06:40:28.200287] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
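[Editorial note] Once the target answers on its RPC socket, bdevio.sh provisions it with the RPCs traced above: create the TCP transport, a 64 MiB / 512 B malloc bdev, the cnode1 subsystem with namespace Malloc0, and a listener on 10.0.0.2:4420; gen_nvmf_target_json then emits the bdev_nvme_attach_controller JSON that bdevio reads. Driven from outside the test process, the same provisioning would look roughly like the sketch below (the test issues identical RPCs in-process via rpc_cmd over the default /var/tmp/spdk.sock socket):

    # Equivalent provisioning via scripts/rpc.py (sketch; arguments taken from the trace above)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420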
00:10:14.256 [2024-12-14 06:40:28.200713] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:14.256 I/O targets: 00:10:14.256 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:14.256 00:10:14.256 00:10:14.256 CUnit - A unit testing framework for C - Version 2.1-3 00:10:14.256 http://cunit.sourceforge.net/ 00:10:14.256 00:10:14.256 00:10:14.256 Suite: bdevio tests on: Nvme1n1 00:10:14.256 Test: blockdev write read block ...passed 00:10:14.256 Test: blockdev write zeroes read block ...passed 00:10:14.256 Test: blockdev write zeroes read no split ...passed 00:10:14.256 Test: blockdev write zeroes read split ...passed 00:10:14.256 Test: blockdev write zeroes read split partial ...passed 00:10:14.256 Test: blockdev reset ...[2024-12-14 06:40:28.234173] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:14.256 [2024-12-14 06:40:28.234421] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f0c80 (9): Bad file descriptor 00:10:14.516 [2024-12-14 06:40:28.251091] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:14.516 passed 00:10:14.516 Test: blockdev write read 8 blocks ...passed 00:10:14.516 Test: blockdev write read size > 128k ...passed 00:10:14.516 Test: blockdev write read invalid size ...passed 00:10:14.516 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:14.516 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:14.516 Test: blockdev write read max offset ...passed 00:10:14.516 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:14.516 Test: blockdev writev readv 8 blocks ...passed 00:10:14.516 Test: blockdev writev readv 30 x 1block ...passed 00:10:14.516 Test: blockdev writev readv block ...passed 00:10:14.516 Test: blockdev writev readv size > 128k ...passed 00:10:14.516 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:14.516 Test: blockdev comparev and writev ...[2024-12-14 06:40:28.262907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.516 [2024-12-14 06:40:28.262959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:14.516 [2024-12-14 06:40:28.262996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.516 [2024-12-14 06:40:28.263009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:14.516 [2024-12-14 06:40:28.263312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.516 [2024-12-14 06:40:28.263333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:14.516 [2024-12-14 06:40:28.263352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.516 [2024-12-14 06:40:28.263364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:14.516 [2024-12-14 06:40:28.263636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.516 [2024-12-14 06:40:28.263655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:14.516 [2024-12-14 06:40:28.263674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.516 [2024-12-14 06:40:28.263686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:14.516 [2024-12-14 06:40:28.264012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.516 [2024-12-14 06:40:28.264037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:14.516 [2024-12-14 06:40:28.264057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:14.516 [2024-12-14 06:40:28.264069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:14.516 passed 00:10:14.516 Test: blockdev nvme passthru rw ...passed 00:10:14.516 Test: blockdev nvme passthru vendor specific ...[2024-12-14 06:40:28.265768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:14.516 [2024-12-14 06:40:28.265822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:14.516 [2024-12-14 06:40:28.265978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:14.516 [2024-12-14 06:40:28.265999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:14.516 passed 00:10:14.516 Test: blockdev nvme admin passthru ...[2024-12-14 06:40:28.266473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:14.516 [2024-12-14 06:40:28.266520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:14.516 [2024-12-14 06:40:28.266635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:14.516 [2024-12-14 06:40:28.266653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:14.516 passed 00:10:14.516 Test: blockdev copy ...passed 00:10:14.516 00:10:14.516 Run Summary: Type Total Ran Passed Failed Inactive 00:10:14.516 suites 1 1 n/a 0 0 00:10:14.516 tests 23 23 23 0 0 00:10:14.516 asserts 152 152 152 0 n/a 00:10:14.516 00:10:14.516 Elapsed time = 0.162 seconds 00:10:14.516 06:40:28 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.516 06:40:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.516 06:40:28 -- common/autotest_common.sh@10 -- # set +x 00:10:14.516 06:40:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.516 06:40:28 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:14.516 06:40:28 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:14.516 06:40:28 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:10:14.516 06:40:28 -- nvmf/common.sh@116 -- # sync 00:10:14.775 06:40:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:14.775 06:40:28 -- nvmf/common.sh@119 -- # set +e 00:10:14.775 06:40:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:14.775 06:40:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:14.775 rmmod nvme_tcp 00:10:14.775 rmmod nvme_fabrics 00:10:14.775 rmmod nvme_keyring 00:10:14.775 06:40:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:14.775 06:40:28 -- nvmf/common.sh@123 -- # set -e 00:10:14.775 06:40:28 -- nvmf/common.sh@124 -- # return 0 00:10:14.775 06:40:28 -- nvmf/common.sh@477 -- # '[' -n 64165 ']' 00:10:14.775 06:40:28 -- nvmf/common.sh@478 -- # killprocess 64165 00:10:14.775 06:40:28 -- common/autotest_common.sh@936 -- # '[' -z 64165 ']' 00:10:14.775 06:40:28 -- common/autotest_common.sh@940 -- # kill -0 64165 00:10:14.775 06:40:28 -- common/autotest_common.sh@941 -- # uname 00:10:14.775 06:40:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:14.775 06:40:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64165 00:10:14.775 06:40:28 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:10:14.775 06:40:28 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:14.775 killing process with pid 64165 00:10:14.775 06:40:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64165' 00:10:14.775 06:40:28 -- common/autotest_common.sh@955 -- # kill 64165 00:10:14.775 06:40:28 -- common/autotest_common.sh@960 -- # wait 64165 00:10:15.035 06:40:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:15.036 06:40:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:15.036 06:40:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:15.036 06:40:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:15.036 06:40:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:15.036 06:40:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.036 06:40:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:15.036 06:40:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.036 06:40:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:15.036 ************************************ 00:10:15.036 END TEST nvmf_bdevio 00:10:15.036 ************************************ 00:10:15.036 00:10:15.036 real 0m2.772s 00:10:15.036 user 0m8.991s 00:10:15.036 sys 0m0.659s 00:10:15.036 06:40:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:15.036 06:40:28 -- common/autotest_common.sh@10 -- # set +x 00:10:15.036 06:40:28 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:10:15.036 06:40:28 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:15.036 06:40:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:15.036 06:40:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:15.036 06:40:28 -- common/autotest_common.sh@10 -- # set +x 00:10:15.036 ************************************ 00:10:15.036 START TEST nvmf_bdevio_no_huge 00:10:15.036 ************************************ 00:10:15.036 06:40:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:10:15.036 * Looking for test storage... 
00:10:15.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:15.036 06:40:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:15.036 06:40:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:15.036 06:40:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:15.036 06:40:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:15.036 06:40:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:15.036 06:40:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:15.036 06:40:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:15.036 06:40:29 -- scripts/common.sh@335 -- # IFS=.-: 00:10:15.036 06:40:29 -- scripts/common.sh@335 -- # read -ra ver1 00:10:15.036 06:40:29 -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.036 06:40:29 -- scripts/common.sh@336 -- # read -ra ver2 00:10:15.036 06:40:29 -- scripts/common.sh@337 -- # local 'op=<' 00:10:15.036 06:40:29 -- scripts/common.sh@339 -- # ver1_l=2 00:10:15.036 06:40:29 -- scripts/common.sh@340 -- # ver2_l=1 00:10:15.036 06:40:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:15.036 06:40:29 -- scripts/common.sh@343 -- # case "$op" in 00:10:15.036 06:40:29 -- scripts/common.sh@344 -- # : 1 00:10:15.036 06:40:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:15.036 06:40:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.036 06:40:29 -- scripts/common.sh@364 -- # decimal 1 00:10:15.296 06:40:29 -- scripts/common.sh@352 -- # local d=1 00:10:15.296 06:40:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.296 06:40:29 -- scripts/common.sh@354 -- # echo 1 00:10:15.296 06:40:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:15.296 06:40:29 -- scripts/common.sh@365 -- # decimal 2 00:10:15.296 06:40:29 -- scripts/common.sh@352 -- # local d=2 00:10:15.296 06:40:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.296 06:40:29 -- scripts/common.sh@354 -- # echo 2 00:10:15.296 06:40:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:15.296 06:40:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:15.296 06:40:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:15.296 06:40:29 -- scripts/common.sh@367 -- # return 0 00:10:15.296 06:40:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.296 06:40:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:15.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.296 --rc genhtml_branch_coverage=1 00:10:15.296 --rc genhtml_function_coverage=1 00:10:15.296 --rc genhtml_legend=1 00:10:15.296 --rc geninfo_all_blocks=1 00:10:15.296 --rc geninfo_unexecuted_blocks=1 00:10:15.296 00:10:15.296 ' 00:10:15.296 06:40:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:15.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.296 --rc genhtml_branch_coverage=1 00:10:15.296 --rc genhtml_function_coverage=1 00:10:15.296 --rc genhtml_legend=1 00:10:15.296 --rc geninfo_all_blocks=1 00:10:15.296 --rc geninfo_unexecuted_blocks=1 00:10:15.296 00:10:15.296 ' 00:10:15.296 06:40:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:15.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.296 --rc genhtml_branch_coverage=1 00:10:15.296 --rc genhtml_function_coverage=1 00:10:15.296 --rc genhtml_legend=1 00:10:15.296 --rc geninfo_all_blocks=1 00:10:15.296 --rc geninfo_unexecuted_blocks=1 00:10:15.296 00:10:15.296 ' 00:10:15.296 
06:40:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:15.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.296 --rc genhtml_branch_coverage=1 00:10:15.296 --rc genhtml_function_coverage=1 00:10:15.296 --rc genhtml_legend=1 00:10:15.296 --rc geninfo_all_blocks=1 00:10:15.296 --rc geninfo_unexecuted_blocks=1 00:10:15.296 00:10:15.296 ' 00:10:15.296 06:40:29 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:15.296 06:40:29 -- nvmf/common.sh@7 -- # uname -s 00:10:15.296 06:40:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.296 06:40:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.296 06:40:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.296 06:40:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.296 06:40:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.296 06:40:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.296 06:40:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.296 06:40:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.296 06:40:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.296 06:40:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.296 06:40:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:10:15.296 06:40:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:10:15.296 06:40:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.296 06:40:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.296 06:40:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:15.296 06:40:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:15.296 06:40:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.296 06:40:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.296 06:40:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.296 06:40:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.296 06:40:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.296 06:40:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.296 06:40:29 -- paths/export.sh@5 -- # export PATH 00:10:15.296 06:40:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.296 06:40:29 -- nvmf/common.sh@46 -- # : 0 00:10:15.296 06:40:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:15.296 06:40:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:15.296 06:40:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:15.296 06:40:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.296 06:40:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.296 06:40:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:15.296 06:40:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:15.296 06:40:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:15.296 06:40:29 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:15.296 06:40:29 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:15.296 06:40:29 -- target/bdevio.sh@14 -- # nvmftestinit 00:10:15.296 06:40:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:15.296 06:40:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.296 06:40:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:15.296 06:40:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:15.296 06:40:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:15.296 06:40:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.296 06:40:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:15.296 06:40:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.296 06:40:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:15.296 06:40:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:15.296 06:40:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:15.296 06:40:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:15.296 06:40:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:15.296 06:40:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:15.296 06:40:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:15.296 06:40:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:15.296 06:40:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:15.296 06:40:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:15.296 06:40:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:15.296 06:40:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:15.296 06:40:29 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:15.296 06:40:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:15.296 06:40:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:15.296 06:40:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:15.296 06:40:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:15.296 06:40:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:15.297 06:40:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:15.297 06:40:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:15.297 Cannot find device "nvmf_tgt_br" 00:10:15.297 06:40:29 -- nvmf/common.sh@154 -- # true 00:10:15.297 06:40:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:15.297 Cannot find device "nvmf_tgt_br2" 00:10:15.297 06:40:29 -- nvmf/common.sh@155 -- # true 00:10:15.297 06:40:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:15.297 06:40:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:15.297 Cannot find device "nvmf_tgt_br" 00:10:15.297 06:40:29 -- nvmf/common.sh@157 -- # true 00:10:15.297 06:40:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:15.297 Cannot find device "nvmf_tgt_br2" 00:10:15.297 06:40:29 -- nvmf/common.sh@158 -- # true 00:10:15.297 06:40:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:15.297 06:40:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:15.297 06:40:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:15.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:15.297 06:40:29 -- nvmf/common.sh@161 -- # true 00:10:15.297 06:40:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:15.297 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:15.297 06:40:29 -- nvmf/common.sh@162 -- # true 00:10:15.297 06:40:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:15.297 06:40:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:15.297 06:40:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:15.297 06:40:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:15.297 06:40:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:15.297 06:40:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:15.297 06:40:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:15.297 06:40:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:15.297 06:40:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:15.297 06:40:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:15.297 06:40:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:15.556 06:40:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:15.556 06:40:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:15.556 06:40:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:15.556 06:40:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:15.556 06:40:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:15.556 06:40:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:15.556 06:40:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:15.556 06:40:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:15.556 06:40:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:15.556 06:40:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:15.556 06:40:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:15.556 06:40:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:15.556 06:40:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:15.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:15.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:10:15.556 00:10:15.556 --- 10.0.0.2 ping statistics --- 00:10:15.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.556 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:10:15.556 06:40:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:15.556 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:15.556 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:10:15.556 00:10:15.556 --- 10.0.0.3 ping statistics --- 00:10:15.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.556 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:15.556 06:40:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:15.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:15.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:10:15.556 00:10:15.556 --- 10.0.0.1 ping statistics --- 00:10:15.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:15.556 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:15.556 06:40:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:15.556 06:40:29 -- nvmf/common.sh@421 -- # return 0 00:10:15.556 06:40:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:15.556 06:40:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:15.556 06:40:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:15.556 06:40:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:15.556 06:40:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:15.556 06:40:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:15.556 06:40:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:15.556 06:40:29 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:15.556 06:40:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:15.556 06:40:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:15.556 06:40:29 -- common/autotest_common.sh@10 -- # set +x 00:10:15.556 06:40:29 -- nvmf/common.sh@469 -- # nvmfpid=64383 00:10:15.556 06:40:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:10:15.556 06:40:29 -- nvmf/common.sh@470 -- # waitforlisten 64383 00:10:15.556 06:40:29 -- common/autotest_common.sh@829 -- # '[' -z 64383 ']' 00:10:15.557 06:40:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.557 06:40:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:15.557 06:40:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
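[Editorial note] The nvmf_bdevio_no_huge pass repeats the same flow but starts both SPDK processes without hugepages: with --no-huge the DPDK environment backs its memory with anonymous pages instead of hugetlbfs, and -s 1024 caps that allocation at 1024 MB, which is exactly the configuration this test exercises. The target launch traced above (pid recorded as nvmfpid=64383 in this run) is:

    # No-hugepages target launch inside the test namespace, as traced above
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!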
00:10:15.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.557 06:40:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:15.557 06:40:29 -- common/autotest_common.sh@10 -- # set +x 00:10:15.557 [2024-12-14 06:40:29.455958] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:15.557 [2024-12-14 06:40:29.456051] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:10:15.816 [2024-12-14 06:40:29.595844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.816 [2024-12-14 06:40:29.729096] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:15.816 [2024-12-14 06:40:29.729272] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.816 [2024-12-14 06:40:29.729287] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.816 [2024-12-14 06:40:29.729306] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.816 [2024-12-14 06:40:29.729475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:15.816 [2024-12-14 06:40:29.729862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:15.816 [2024-12-14 06:40:29.729990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:15.816 [2024-12-14 06:40:29.729992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.753 06:40:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.753 06:40:30 -- common/autotest_common.sh@862 -- # return 0 00:10:16.753 06:40:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:16.753 06:40:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:16.753 06:40:30 -- common/autotest_common.sh@10 -- # set +x 00:10:16.753 06:40:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.753 06:40:30 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:16.753 06:40:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.753 06:40:30 -- common/autotest_common.sh@10 -- # set +x 00:10:16.753 [2024-12-14 06:40:30.524708] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.753 06:40:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.753 06:40:30 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:16.753 06:40:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.753 06:40:30 -- common/autotest_common.sh@10 -- # set +x 00:10:16.753 Malloc0 00:10:16.753 06:40:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.753 06:40:30 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.753 06:40:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.753 06:40:30 -- common/autotest_common.sh@10 -- # set +x 00:10:16.753 06:40:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.753 06:40:30 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.753 06:40:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.753 06:40:30 -- common/autotest_common.sh@10 -- # set +x 
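[Editorial note] The core masks explain the reactor notices in the EAL output: a mask is a bitmap of CPU cores, so the target's -m 0x78 selects cores 3-6 (the four "Reactor started on core" lines above), while the bdevio client below runs with -c 0x7, i.e. cores 0-2, keeping the two processes on disjoint cores. A quick check of the arithmetic:

    # Core masks used in this run
    printf '0x%x\n' $(( (1<<3)|(1<<4)|(1<<5)|(1<<6) ))   # 0x78 -> cores 3,4,5,6 (nvmf_tgt)
    printf '0x%x\n' $(( (1<<0)|(1<<1)|(1<<2) ))          # 0x7  -> cores 0,1,2  (bdevio)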
00:10:16.753 06:40:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.753 06:40:30 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.753 06:40:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.753 06:40:30 -- common/autotest_common.sh@10 -- # set +x 00:10:16.753 [2024-12-14 06:40:30.562842] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.753 06:40:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.753 06:40:30 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:10:16.753 06:40:30 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:16.753 06:40:30 -- nvmf/common.sh@520 -- # config=() 00:10:16.753 06:40:30 -- nvmf/common.sh@520 -- # local subsystem config 00:10:16.753 06:40:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:16.753 06:40:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:16.753 { 00:10:16.753 "params": { 00:10:16.753 "name": "Nvme$subsystem", 00:10:16.753 "trtype": "$TEST_TRANSPORT", 00:10:16.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.753 "adrfam": "ipv4", 00:10:16.753 "trsvcid": "$NVMF_PORT", 00:10:16.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.753 "hdgst": ${hdgst:-false}, 00:10:16.753 "ddgst": ${ddgst:-false} 00:10:16.753 }, 00:10:16.753 "method": "bdev_nvme_attach_controller" 00:10:16.753 } 00:10:16.753 EOF 00:10:16.753 )") 00:10:16.753 06:40:30 -- nvmf/common.sh@542 -- # cat 00:10:16.753 06:40:30 -- nvmf/common.sh@544 -- # jq . 00:10:16.753 06:40:30 -- nvmf/common.sh@545 -- # IFS=, 00:10:16.753 06:40:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:16.753 "params": { 00:10:16.753 "name": "Nvme1", 00:10:16.753 "trtype": "tcp", 00:10:16.753 "traddr": "10.0.0.2", 00:10:16.753 "adrfam": "ipv4", 00:10:16.753 "trsvcid": "4420", 00:10:16.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:16.753 "hdgst": false, 00:10:16.753 "ddgst": false 00:10:16.753 }, 00:10:16.753 "method": "bdev_nvme_attach_controller" 00:10:16.753 }' 00:10:16.753 [2024-12-14 06:40:30.620158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:16.753 [2024-12-14 06:40:30.620260] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid64423 ] 00:10:17.012 [2024-12-14 06:40:30.764144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:17.012 [2024-12-14 06:40:30.896720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.012 [2024-12-14 06:40:30.896947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.012 [2024-12-14 06:40:30.897145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.272 [2024-12-14 06:40:31.066576] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
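For reference, the target provisioning driven by bdevio.sh in the trace above reduces to five RPCs. A minimal standalone sketch, assuming the harness's rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock (paths, NQNs and arguments exactly as in the log):

  # sketch only: same sequence as bdevio.sh@18-22 above, issued directly via rpc.py
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport, same options the script passes
  $rpc bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB malloc bdev, 512-byte blocks (the Nvme1n1 target below)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then connects through the generated JSON shown above (bdev_nvme_attach_controller to 10.0.0.2:4420 as Nvme1), which is what produces the Nvme1n1 I/O target in the unit-test run that follows.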
00:10:17.272 [2024-12-14 06:40:31.066812] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:17.272 I/O targets: 00:10:17.272 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:17.272 00:10:17.272 00:10:17.272 CUnit - A unit testing framework for C - Version 2.1-3 00:10:17.272 http://cunit.sourceforge.net/ 00:10:17.272 00:10:17.272 00:10:17.272 Suite: bdevio tests on: Nvme1n1 00:10:17.272 Test: blockdev write read block ...passed 00:10:17.272 Test: blockdev write zeroes read block ...passed 00:10:17.272 Test: blockdev write zeroes read no split ...passed 00:10:17.272 Test: blockdev write zeroes read split ...passed 00:10:17.272 Test: blockdev write zeroes read split partial ...passed 00:10:17.272 Test: blockdev reset ...[2024-12-14 06:40:31.105971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:17.272 [2024-12-14 06:40:31.106181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x151a680 (9): Bad file descriptor 00:10:17.272 [2024-12-14 06:40:31.125435] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:17.272 passed 00:10:17.272 Test: blockdev write read 8 blocks ...passed 00:10:17.272 Test: blockdev write read size > 128k ...passed 00:10:17.272 Test: blockdev write read invalid size ...passed 00:10:17.272 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:17.272 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:17.272 Test: blockdev write read max offset ...passed 00:10:17.272 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:17.272 Test: blockdev writev readv 8 blocks ...passed 00:10:17.272 Test: blockdev writev readv 30 x 1block ...passed 00:10:17.272 Test: blockdev writev readv block ...passed 00:10:17.272 Test: blockdev writev readv size > 128k ...passed 00:10:17.272 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:17.272 Test: blockdev comparev and writev ...[2024-12-14 06:40:31.138284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.272 [2024-12-14 06:40:31.138813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:17.272 [2024-12-14 06:40:31.139475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.272 [2024-12-14 06:40:31.139970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:17.272 [2024-12-14 06:40:31.140779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.272 [2024-12-14 06:40:31.141425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:17.272 [2024-12-14 06:40:31.141655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.272 [2024-12-14 06:40:31.141817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:17.272 [2024-12-14 06:40:31.142166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.272 [2024-12-14 06:40:31.142205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:17.272 [2024-12-14 06:40:31.142223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.272 [2024-12-14 06:40:31.142234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:17.272 [2024-12-14 06:40:31.142510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.272 [2024-12-14 06:40:31.142531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:17.272 [2024-12-14 06:40:31.142548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:17.272 [2024-12-14 06:40:31.142558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:17.272 passed 00:10:17.272 Test: blockdev nvme passthru rw ...passed 00:10:17.272 Test: blockdev nvme passthru vendor specific ...[2024-12-14 06:40:31.143699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.272 [2024-12-14 06:40:31.143843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:17.272 [2024-12-14 06:40:31.144130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.272 [2024-12-14 06:40:31.144222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:17.272 [2024-12-14 06:40:31.144518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.272 [2024-12-14 06:40:31.144550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:17.272 [2024-12-14 06:40:31.144662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:17.272 [2024-12-14 06:40:31.144914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:17.272 passed 00:10:17.272 Test: blockdev nvme admin passthru ...passed 00:10:17.272 Test: blockdev copy ...passed 00:10:17.272 00:10:17.272 Run Summary: Type Total Ran Passed Failed Inactive 00:10:17.272 suites 1 1 n/a 0 0 00:10:17.272 tests 23 23 23 0 0 00:10:17.272 asserts 152 152 152 0 n/a 00:10:17.272 00:10:17.272 Elapsed time = 0.169 seconds 00:10:17.532 06:40:31 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.532 06:40:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.532 06:40:31 -- common/autotest_common.sh@10 -- # set +x 00:10:17.532 06:40:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.532 06:40:31 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:17.532 06:40:31 -- target/bdevio.sh@30 -- # nvmftestfini 00:10:17.532 06:40:31 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:10:17.532 06:40:31 -- nvmf/common.sh@116 -- # sync 00:10:17.791 06:40:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:17.791 06:40:31 -- nvmf/common.sh@119 -- # set +e 00:10:17.791 06:40:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:17.791 06:40:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:17.791 rmmod nvme_tcp 00:10:17.791 rmmod nvme_fabrics 00:10:17.791 rmmod nvme_keyring 00:10:17.791 06:40:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:17.791 06:40:31 -- nvmf/common.sh@123 -- # set -e 00:10:17.791 06:40:31 -- nvmf/common.sh@124 -- # return 0 00:10:17.791 06:40:31 -- nvmf/common.sh@477 -- # '[' -n 64383 ']' 00:10:17.791 06:40:31 -- nvmf/common.sh@478 -- # killprocess 64383 00:10:17.791 06:40:31 -- common/autotest_common.sh@936 -- # '[' -z 64383 ']' 00:10:17.791 06:40:31 -- common/autotest_common.sh@940 -- # kill -0 64383 00:10:17.791 06:40:31 -- common/autotest_common.sh@941 -- # uname 00:10:17.791 06:40:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:17.791 06:40:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64383 00:10:17.791 06:40:31 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:10:17.791 06:40:31 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:10:17.791 06:40:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64383' 00:10:17.791 killing process with pid 64383 00:10:17.791 06:40:31 -- common/autotest_common.sh@955 -- # kill 64383 00:10:17.791 06:40:31 -- common/autotest_common.sh@960 -- # wait 64383 00:10:18.050 06:40:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:18.050 06:40:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:18.050 06:40:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:18.050 06:40:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:18.050 06:40:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:18.050 06:40:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.050 06:40:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.050 06:40:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.050 06:40:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:18.050 00:10:18.050 real 0m3.163s 00:10:18.050 user 0m10.289s 00:10:18.050 sys 0m1.177s 00:10:18.050 06:40:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.050 06:40:32 -- common/autotest_common.sh@10 -- # set +x 00:10:18.050 ************************************ 00:10:18.050 END TEST nvmf_bdevio_no_huge 00:10:18.050 ************************************ 00:10:18.310 06:40:32 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:18.310 06:40:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:18.310 06:40:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.310 06:40:32 -- common/autotest_common.sh@10 -- # set +x 00:10:18.310 ************************************ 00:10:18.310 START TEST nvmf_tls 00:10:18.310 ************************************ 00:10:18.310 06:40:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:10:18.310 * Looking for test storage... 
00:10:18.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:18.310 06:40:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:18.310 06:40:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:18.310 06:40:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:18.310 06:40:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:18.310 06:40:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:18.310 06:40:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:18.310 06:40:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:18.310 06:40:32 -- scripts/common.sh@335 -- # IFS=.-: 00:10:18.310 06:40:32 -- scripts/common.sh@335 -- # read -ra ver1 00:10:18.310 06:40:32 -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.310 06:40:32 -- scripts/common.sh@336 -- # read -ra ver2 00:10:18.310 06:40:32 -- scripts/common.sh@337 -- # local 'op=<' 00:10:18.310 06:40:32 -- scripts/common.sh@339 -- # ver1_l=2 00:10:18.310 06:40:32 -- scripts/common.sh@340 -- # ver2_l=1 00:10:18.310 06:40:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:18.310 06:40:32 -- scripts/common.sh@343 -- # case "$op" in 00:10:18.310 06:40:32 -- scripts/common.sh@344 -- # : 1 00:10:18.310 06:40:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:18.310 06:40:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.310 06:40:32 -- scripts/common.sh@364 -- # decimal 1 00:10:18.310 06:40:32 -- scripts/common.sh@352 -- # local d=1 00:10:18.310 06:40:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.310 06:40:32 -- scripts/common.sh@354 -- # echo 1 00:10:18.310 06:40:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:18.310 06:40:32 -- scripts/common.sh@365 -- # decimal 2 00:10:18.310 06:40:32 -- scripts/common.sh@352 -- # local d=2 00:10:18.310 06:40:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.310 06:40:32 -- scripts/common.sh@354 -- # echo 2 00:10:18.310 06:40:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:18.310 06:40:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:18.310 06:40:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:18.310 06:40:32 -- scripts/common.sh@367 -- # return 0 00:10:18.310 06:40:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.310 06:40:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:18.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.310 --rc genhtml_branch_coverage=1 00:10:18.310 --rc genhtml_function_coverage=1 00:10:18.310 --rc genhtml_legend=1 00:10:18.310 --rc geninfo_all_blocks=1 00:10:18.310 --rc geninfo_unexecuted_blocks=1 00:10:18.310 00:10:18.310 ' 00:10:18.310 06:40:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:18.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.310 --rc genhtml_branch_coverage=1 00:10:18.310 --rc genhtml_function_coverage=1 00:10:18.310 --rc genhtml_legend=1 00:10:18.310 --rc geninfo_all_blocks=1 00:10:18.310 --rc geninfo_unexecuted_blocks=1 00:10:18.310 00:10:18.310 ' 00:10:18.310 06:40:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:18.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.310 --rc genhtml_branch_coverage=1 00:10:18.310 --rc genhtml_function_coverage=1 00:10:18.310 --rc genhtml_legend=1 00:10:18.310 --rc geninfo_all_blocks=1 00:10:18.310 --rc geninfo_unexecuted_blocks=1 00:10:18.310 00:10:18.310 ' 00:10:18.310 
06:40:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:18.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.310 --rc genhtml_branch_coverage=1 00:10:18.310 --rc genhtml_function_coverage=1 00:10:18.310 --rc genhtml_legend=1 00:10:18.310 --rc geninfo_all_blocks=1 00:10:18.310 --rc geninfo_unexecuted_blocks=1 00:10:18.310 00:10:18.310 ' 00:10:18.310 06:40:32 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:18.310 06:40:32 -- nvmf/common.sh@7 -- # uname -s 00:10:18.310 06:40:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.310 06:40:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.310 06:40:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.310 06:40:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.310 06:40:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.310 06:40:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.310 06:40:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.310 06:40:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.310 06:40:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.310 06:40:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.310 06:40:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:10:18.310 06:40:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:10:18.310 06:40:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.310 06:40:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.310 06:40:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:18.310 06:40:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.310 06:40:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.310 06:40:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.310 06:40:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.310 06:40:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.310 06:40:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.311 06:40:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.311 06:40:32 -- paths/export.sh@5 -- # export PATH 00:10:18.311 06:40:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.311 06:40:32 -- nvmf/common.sh@46 -- # : 0 00:10:18.311 06:40:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:18.311 06:40:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:18.311 06:40:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:18.311 06:40:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.311 06:40:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.311 06:40:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:18.311 06:40:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:18.311 06:40:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:18.311 06:40:32 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.311 06:40:32 -- target/tls.sh@71 -- # nvmftestinit 00:10:18.311 06:40:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:18.311 06:40:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.311 06:40:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:18.311 06:40:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:18.311 06:40:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:18.311 06:40:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.311 06:40:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:18.311 06:40:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.311 06:40:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:18.311 06:40:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:18.311 06:40:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:18.311 06:40:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:18.311 06:40:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:18.311 06:40:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:18.311 06:40:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.311 06:40:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.311 06:40:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:18.311 06:40:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:18.311 06:40:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:18.311 06:40:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:18.311 06:40:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:18.311 
06:40:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.311 06:40:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:18.570 06:40:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:18.570 06:40:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:18.570 06:40:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:18.570 06:40:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:18.570 06:40:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:18.570 Cannot find device "nvmf_tgt_br" 00:10:18.570 06:40:32 -- nvmf/common.sh@154 -- # true 00:10:18.570 06:40:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:18.570 Cannot find device "nvmf_tgt_br2" 00:10:18.570 06:40:32 -- nvmf/common.sh@155 -- # true 00:10:18.570 06:40:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:18.570 06:40:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:18.570 Cannot find device "nvmf_tgt_br" 00:10:18.570 06:40:32 -- nvmf/common.sh@157 -- # true 00:10:18.570 06:40:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:18.570 Cannot find device "nvmf_tgt_br2" 00:10:18.570 06:40:32 -- nvmf/common.sh@158 -- # true 00:10:18.570 06:40:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:18.570 06:40:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:18.570 06:40:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:18.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.570 06:40:32 -- nvmf/common.sh@161 -- # true 00:10:18.570 06:40:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:18.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:18.570 06:40:32 -- nvmf/common.sh@162 -- # true 00:10:18.570 06:40:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:18.570 06:40:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:18.570 06:40:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:18.570 06:40:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:18.570 06:40:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:18.570 06:40:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:18.570 06:40:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:18.829 06:40:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:18.829 06:40:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:18.829 06:40:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:18.829 06:40:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:18.829 06:40:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:18.829 06:40:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:18.829 06:40:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:18.829 06:40:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:18.829 06:40:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:18.829 06:40:32 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:18.829 06:40:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:18.829 06:40:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:18.829 06:40:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:18.829 06:40:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:18.829 06:40:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:18.829 06:40:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:18.829 06:40:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:18.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:10:18.829 00:10:18.829 --- 10.0.0.2 ping statistics --- 00:10:18.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.829 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:18.829 06:40:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:18.829 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:18.829 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:10:18.829 00:10:18.829 --- 10.0.0.3 ping statistics --- 00:10:18.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.829 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:18.829 06:40:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:18.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:10:18.829 00:10:18.829 --- 10.0.0.1 ping statistics --- 00:10:18.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.829 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:18.829 06:40:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.829 06:40:32 -- nvmf/common.sh@421 -- # return 0 00:10:18.829 06:40:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:18.829 06:40:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.829 06:40:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:18.829 06:40:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:18.829 06:40:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.829 06:40:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:18.829 06:40:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:18.829 06:40:32 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:10:18.829 06:40:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:18.829 06:40:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.829 06:40:32 -- common/autotest_common.sh@10 -- # set +x 00:10:18.829 06:40:32 -- nvmf/common.sh@469 -- # nvmfpid=64609 00:10:18.829 06:40:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:10:18.829 06:40:32 -- nvmf/common.sh@470 -- # waitforlisten 64609 00:10:18.829 06:40:32 -- common/autotest_common.sh@829 -- # '[' -z 64609 ']' 00:10:18.829 06:40:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.829 06:40:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.829 06:40:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
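The nvmf_veth_init trace above (the ip netns / veth / bridge / iptables commands) builds the usual virtual topology for these tests: the target owns 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, the initiator side keeps 10.0.0.1, and the host-side veth ends are joined by the nvmf_br bridge. A condensed sketch of the result, with interface names and addresses exactly as in the trace:

  # veth pairs; every *_br end is enslaved to the nvmf_br bridge:
  #   nvmf_init_if (default netns, 10.0.0.1/24)       <->  nvmf_init_br
  #   nvmf_tgt_if  (nvmf_tgt_ns_spdk, 10.0.0.2/24)    <->  nvmf_tgt_br
  #   nvmf_tgt_if2 (nvmf_tgt_ns_spdk, 10.0.0.3/24)    <->  nvmf_tgt_br2
  # traffic to the NVMe/TCP listener port is explicitly allowed:
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the readiness check before the target application is started.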
00:10:18.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.829 06:40:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.829 06:40:32 -- common/autotest_common.sh@10 -- # set +x 00:10:18.829 [2024-12-14 06:40:32.757546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:18.829 [2024-12-14 06:40:32.757660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.088 [2024-12-14 06:40:32.905968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.088 [2024-12-14 06:40:32.973717] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:19.088 [2024-12-14 06:40:32.973952] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.088 [2024-12-14 06:40:32.973970] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.088 [2024-12-14 06:40:32.973981] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.088 [2024-12-14 06:40:32.974019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.025 06:40:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:20.025 06:40:33 -- common/autotest_common.sh@862 -- # return 0 00:10:20.025 06:40:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:20.025 06:40:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:20.025 06:40:33 -- common/autotest_common.sh@10 -- # set +x 00:10:20.025 06:40:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.025 06:40:33 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:10:20.025 06:40:33 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:10:20.284 true 00:10:20.284 06:40:34 -- target/tls.sh@82 -- # jq -r .tls_version 00:10:20.284 06:40:34 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:20.542 06:40:34 -- target/tls.sh@82 -- # version=0 00:10:20.542 06:40:34 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:10:20.542 06:40:34 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:20.801 06:40:34 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:20.801 06:40:34 -- target/tls.sh@90 -- # jq -r .tls_version 00:10:21.060 06:40:34 -- target/tls.sh@90 -- # version=13 00:10:21.060 06:40:34 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:10:21.060 06:40:34 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:10:21.060 06:40:35 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:21.060 06:40:35 -- target/tls.sh@98 -- # jq -r .tls_version 00:10:21.321 06:40:35 -- target/tls.sh@98 -- # version=7 00:10:21.321 06:40:35 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:10:21.321 06:40:35 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:21.321 06:40:35 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:10:21.580 06:40:35 -- target/tls.sh@105 -- # ktls=false 00:10:21.580 06:40:35 -- 
target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:10:21.580 06:40:35 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:10:21.839 06:40:35 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:21.839 06:40:35 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:10:22.098 06:40:35 -- target/tls.sh@113 -- # ktls=true 00:10:22.098 06:40:35 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:10:22.098 06:40:35 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:10:22.357 06:40:36 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:10:22.357 06:40:36 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:10:22.616 06:40:36 -- target/tls.sh@121 -- # ktls=false 00:10:22.616 06:40:36 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:10:22.616 06:40:36 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:10:22.616 06:40:36 -- target/tls.sh@49 -- # local key hash crc 00:10:22.616 06:40:36 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:10:22.616 06:40:36 -- target/tls.sh@51 -- # hash=01 00:10:22.616 06:40:36 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:10:22.616 06:40:36 -- target/tls.sh@52 -- # gzip -1 -c 00:10:22.616 06:40:36 -- target/tls.sh@52 -- # tail -c8 00:10:22.616 06:40:36 -- target/tls.sh@52 -- # head -c 4 00:10:22.616 06:40:36 -- target/tls.sh@52 -- # crc='p$H�' 00:10:22.616 06:40:36 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:22.616 06:40:36 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:10:22.616 06:40:36 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:22.616 06:40:36 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:22.616 06:40:36 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:10:22.616 06:40:36 -- target/tls.sh@49 -- # local key hash crc 00:10:22.616 06:40:36 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:10:22.617 06:40:36 -- target/tls.sh@51 -- # hash=01 00:10:22.617 06:40:36 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:10:22.617 06:40:36 -- target/tls.sh@52 -- # gzip -1 -c 00:10:22.617 06:40:36 -- target/tls.sh@52 -- # tail -c8 00:10:22.617 06:40:36 -- target/tls.sh@52 -- # head -c 4 00:10:22.617 06:40:36 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:10:22.617 06:40:36 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:10:22.617 06:40:36 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:22.617 06:40:36 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:22.617 06:40:36 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:22.617 06:40:36 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:22.617 06:40:36 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:22.617 06:40:36 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:10:22.617 06:40:36 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:10:22.617 06:40:36 -- target/tls.sh@136 -- # chmod 0600 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:22.617 06:40:36 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:22.617 06:40:36 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:10:22.876 06:40:36 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:10:23.135 06:40:37 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:23.135 06:40:37 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:23.135 06:40:37 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:23.394 [2024-12-14 06:40:37.360162] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.394 06:40:37 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:23.652 06:40:37 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:23.911 [2024-12-14 06:40:37.824241] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:23.911 [2024-12-14 06:40:37.824472] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.911 06:40:37 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:24.170 malloc0 00:10:24.170 06:40:38 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:24.429 06:40:38 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:24.689 06:40:38 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:36.896 Initializing NVMe Controllers 00:10:36.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:36.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:36.896 Initialization complete. Launching workers. 
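The format_interchange_psk helper traced above turns a configured key into the NVMe TLS PSK interchange form by appending a CRC-32 and base64-encoding the result; the CRC comes from a gzip trailer, whose last 8 bytes are CRC-32 (4 bytes) followed by the input length. A minimal re-derivation of key1 (key and hash field exactly as in the log; not the helper itself):

  key=00112233445566778899aabbccddeeff    # configured key, treated as a literal string
  hash=01                                 # hash identifier the script passes for these keys
  # gzip -1 is used only for its trailer: tail -c8 | head -c4 extracts the CRC-32 bytes
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
  # interchange form: NVMeTLSkey-1:<hash>:base64(key || crc):
  echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
  # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
  # note: storing raw CRC bytes in a shell variable (as the traced helper also does)
  # works for these keys but is not binary-safe in general (NUL/newline bytes).

The resulting strings are written to key1.txt / key2.txt, chmod 0600, and later handed to nvmf_subsystem_add_host and bdev_nvme_attach_controller via --psk.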
00:10:36.896 ======================================================== 00:10:36.896 Latency(us) 00:10:36.896 Device Information : IOPS MiB/s Average min max 00:10:36.896 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10929.60 42.69 5856.88 1509.50 8490.89 00:10:36.896 ======================================================== 00:10:36.896 Total : 10929.60 42.69 5856.88 1509.50 8490.89 00:10:36.896 00:10:36.896 06:40:48 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:36.896 06:40:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:36.896 06:40:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:36.896 06:40:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:36.896 06:40:48 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:36.896 06:40:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:36.896 06:40:48 -- target/tls.sh@28 -- # bdevperf_pid=64853 00:10:36.896 06:40:48 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:36.896 06:40:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:36.896 06:40:48 -- target/tls.sh@31 -- # waitforlisten 64853 /var/tmp/bdevperf.sock 00:10:36.896 06:40:48 -- common/autotest_common.sh@829 -- # '[' -z 64853 ']' 00:10:36.896 06:40:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:36.896 06:40:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:36.896 06:40:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:36.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:36.897 06:40:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:36.897 06:40:48 -- common/autotest_common.sh@10 -- # set +x 00:10:36.897 [2024-12-14 06:40:48.725539] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:36.897 [2024-12-14 06:40:48.725832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64853 ] 00:10:36.897 [2024-12-14 06:40:48.866301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.897 [2024-12-14 06:40:48.935230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.897 06:40:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:36.897 06:40:49 -- common/autotest_common.sh@862 -- # return 0 00:10:36.897 06:40:49 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:36.897 [2024-12-14 06:40:49.885047] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:36.897 TLSTESTn1 00:10:36.897 06:40:49 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:36.897 Running I/O for 10 seconds... 
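The run_bdevperf helper above drives the initiator side entirely over bdevperf's own RPC socket: start bdevperf idle, attach a TLS-enabled NVMe-oF controller with the PSK, then trigger the configured workload. A condensed sketch with the exact arguments from the trace (the harness additionally waits for /var/tmp/bdevperf.sock before issuing the RPC):

  spdk=/home/vagrant/spdk_repo/spdk
  # 1. start bdevperf idle (-z: wait for RPC configuration) on its own socket
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # 2. attach the target subsystem over TCP with the TLS PSK; this creates bdev TLSTESTn1
  $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk $spdk/test/nvmf/target/key1.txt
  # 3. kick off the verify workload and collect the summary, as invoked above
  $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The later negative cases reuse the same flow with a mismatched key or NQN and expect the attach_controller call to fail.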
00:10:46.872 00:10:46.872 Latency(us) 00:10:46.872 [2024-12-14T06:41:00.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.872 [2024-12-14T06:41:00.864Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:10:46.872 Verification LBA range: start 0x0 length 0x2000 00:10:46.872 TLSTESTn1 : 10.01 6200.44 24.22 0.00 0.00 20611.11 4944.99 22639.71 00:10:46.872 [2024-12-14T06:41:00.864Z] =================================================================================================================== 00:10:46.872 [2024-12-14T06:41:00.864Z] Total : 6200.44 24.22 0.00 0.00 20611.11 4944.99 22639.71 00:10:46.872 0 00:10:46.872 06:41:00 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:46.872 06:41:00 -- target/tls.sh@45 -- # killprocess 64853 00:10:46.872 06:41:00 -- common/autotest_common.sh@936 -- # '[' -z 64853 ']' 00:10:46.872 06:41:00 -- common/autotest_common.sh@940 -- # kill -0 64853 00:10:46.872 06:41:00 -- common/autotest_common.sh@941 -- # uname 00:10:46.872 06:41:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:46.872 06:41:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64853 00:10:46.872 killing process with pid 64853 00:10:46.872 Received shutdown signal, test time was about 10.000000 seconds 00:10:46.872 00:10:46.872 Latency(us) 00:10:46.872 [2024-12-14T06:41:00.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.872 [2024-12-14T06:41:00.864Z] =================================================================================================================== 00:10:46.872 [2024-12-14T06:41:00.864Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:46.872 06:41:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:46.872 06:41:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:46.872 06:41:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64853' 00:10:46.872 06:41:00 -- common/autotest_common.sh@955 -- # kill 64853 00:10:46.872 06:41:00 -- common/autotest_common.sh@960 -- # wait 64853 00:10:46.872 06:41:00 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:46.872 06:41:00 -- common/autotest_common.sh@650 -- # local es=0 00:10:46.872 06:41:00 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:46.872 06:41:00 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:46.872 06:41:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:46.872 06:41:00 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:46.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:10:46.872 06:41:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:46.872 06:41:00 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:46.872 06:41:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:46.872 06:41:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:46.872 06:41:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:46.872 06:41:00 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:10:46.872 06:41:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:46.872 06:41:00 -- target/tls.sh@28 -- # bdevperf_pid=64986 00:10:46.872 06:41:00 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:46.872 06:41:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:46.872 06:41:00 -- target/tls.sh@31 -- # waitforlisten 64986 /var/tmp/bdevperf.sock 00:10:46.872 06:41:00 -- common/autotest_common.sh@829 -- # '[' -z 64986 ']' 00:10:46.872 06:41:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:46.872 06:41:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:46.872 06:41:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:46.872 06:41:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:46.872 06:41:00 -- common/autotest_common.sh@10 -- # set +x 00:10:46.872 [2024-12-14 06:41:00.370177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:46.872 [2024-12-14 06:41:00.370629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64986 ] 00:10:46.872 [2024-12-14 06:41:00.501123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.872 [2024-12-14 06:41:00.552150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.482 06:41:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:47.482 06:41:01 -- common/autotest_common.sh@862 -- # return 0 00:10:47.482 06:41:01 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:10:47.741 [2024-12-14 06:41:01.572970] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:47.741 [2024-12-14 06:41:01.584356] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:47.741 [2024-12-14 06:41:01.584825] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e650 (107): Transport endpoint is not connected 00:10:47.741 [2024-12-14 06:41:01.585811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e650 (9): Bad file descriptor 00:10:47.741 [2024-12-14 06:41:01.586807] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:47.741 [2024-12-14 06:41:01.587209] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:47.741 [2024-12-14 06:41:01.587436] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:10:47.741 request: 00:10:47.741 { 00:10:47.741 "name": "TLSTEST", 00:10:47.741 "trtype": "tcp", 00:10:47.741 "traddr": "10.0.0.2", 00:10:47.741 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:47.741 "adrfam": "ipv4", 00:10:47.741 "trsvcid": "4420", 00:10:47.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:47.741 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:10:47.741 "method": "bdev_nvme_attach_controller", 00:10:47.741 "req_id": 1 00:10:47.741 } 00:10:47.741 Got JSON-RPC error response 00:10:47.741 response: 00:10:47.741 { 00:10:47.741 "code": -32602, 00:10:47.741 "message": "Invalid parameters" 00:10:47.741 } 00:10:47.741 06:41:01 -- target/tls.sh@36 -- # killprocess 64986 00:10:47.741 06:41:01 -- common/autotest_common.sh@936 -- # '[' -z 64986 ']' 00:10:47.741 06:41:01 -- common/autotest_common.sh@940 -- # kill -0 64986 00:10:47.741 06:41:01 -- common/autotest_common.sh@941 -- # uname 00:10:47.741 06:41:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:47.741 06:41:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64986 00:10:47.741 killing process with pid 64986 00:10:47.741 Received shutdown signal, test time was about 10.000000 seconds 00:10:47.741 00:10:47.741 Latency(us) 00:10:47.741 [2024-12-14T06:41:01.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:47.741 [2024-12-14T06:41:01.733Z] =================================================================================================================== 00:10:47.741 [2024-12-14T06:41:01.733Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:47.741 06:41:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:47.741 06:41:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:47.741 06:41:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64986' 00:10:47.741 06:41:01 -- common/autotest_common.sh@955 -- # kill 64986 00:10:47.741 06:41:01 -- common/autotest_common.sh@960 -- # wait 64986 00:10:48.000 06:41:01 -- target/tls.sh@37 -- # return 1 00:10:48.000 06:41:01 -- common/autotest_common.sh@653 -- # es=1 00:10:48.000 06:41:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:48.000 06:41:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:48.000 06:41:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:48.000 06:41:01 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:48.000 06:41:01 -- common/autotest_common.sh@650 -- # local es=0 00:10:48.000 06:41:01 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:48.000 06:41:01 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:48.000 06:41:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:48.000 06:41:01 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:48.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:10:48.000 06:41:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:48.000 06:41:01 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:48.000 06:41:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:48.000 06:41:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:48.000 06:41:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:10:48.000 06:41:01 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:48.000 06:41:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:48.000 06:41:01 -- target/tls.sh@28 -- # bdevperf_pid=65008 00:10:48.000 06:41:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:48.000 06:41:01 -- target/tls.sh@31 -- # waitforlisten 65008 /var/tmp/bdevperf.sock 00:10:48.000 06:41:01 -- common/autotest_common.sh@829 -- # '[' -z 65008 ']' 00:10:48.001 06:41:01 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:48.001 06:41:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:48.001 06:41:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.001 06:41:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:48.001 06:41:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.001 06:41:01 -- common/autotest_common.sh@10 -- # set +x 00:10:48.001 [2024-12-14 06:41:01.869455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:48.001 [2024-12-14 06:41:01.870445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65008 ] 00:10:48.259 [2024-12-14 06:41:02.003796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.259 [2024-12-14 06:41:02.055072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.827 06:41:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.086 06:41:02 -- common/autotest_common.sh@862 -- # return 0 00:10:49.086 06:41:02 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:49.086 [2024-12-14 06:41:03.059520] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:49.086 [2024-12-14 06:41:03.064841] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:49.086 [2024-12-14 06:41:03.065107] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:10:49.086 [2024-12-14 06:41:03.065308] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:49.086 [2024-12-14 06:41:03.065688] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1776650 (107): Transport endpoint is not connected 00:10:49.086 [2024-12-14 06:41:03.066670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1776650 (9): Bad file descriptor 00:10:49.086 [2024-12-14 06:41:03.067666] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:49.086 [2024-12-14 06:41:03.067694] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:49.086 [2024-12-14 06:41:03.067720] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:10:49.086 request: 00:10:49.086 { 00:10:49.086 "name": "TLSTEST", 00:10:49.086 "trtype": "tcp", 00:10:49.086 "traddr": "10.0.0.2", 00:10:49.087 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:10:49.087 "adrfam": "ipv4", 00:10:49.087 "trsvcid": "4420", 00:10:49.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.087 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:10:49.087 "method": "bdev_nvme_attach_controller", 00:10:49.087 "req_id": 1 00:10:49.087 } 00:10:49.087 Got JSON-RPC error response 00:10:49.087 response: 00:10:49.087 { 00:10:49.087 "code": -32602, 00:10:49.087 "message": "Invalid parameters" 00:10:49.087 } 00:10:49.346 06:41:03 -- target/tls.sh@36 -- # killprocess 65008 00:10:49.346 06:41:03 -- common/autotest_common.sh@936 -- # '[' -z 65008 ']' 00:10:49.346 06:41:03 -- common/autotest_common.sh@940 -- # kill -0 65008 00:10:49.346 06:41:03 -- common/autotest_common.sh@941 -- # uname 00:10:49.346 06:41:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:49.346 06:41:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65008 00:10:49.346 killing process with pid 65008 00:10:49.346 Received shutdown signal, test time was about 10.000000 seconds 00:10:49.346 00:10:49.346 Latency(us) 00:10:49.346 [2024-12-14T06:41:03.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.346 [2024-12-14T06:41:03.338Z] =================================================================================================================== 00:10:49.346 [2024-12-14T06:41:03.338Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:49.346 06:41:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:49.346 06:41:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:49.346 06:41:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65008' 00:10:49.346 06:41:03 -- common/autotest_common.sh@955 -- # kill 65008 00:10:49.346 06:41:03 -- common/autotest_common.sh@960 -- # wait 65008 00:10:49.346 06:41:03 -- target/tls.sh@37 -- # return 1 00:10:49.346 06:41:03 -- common/autotest_common.sh@653 -- # es=1 00:10:49.346 06:41:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:49.346 06:41:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:49.346 06:41:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:49.346 06:41:03 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:49.346 06:41:03 -- common/autotest_common.sh@650 -- # local es=0 00:10:49.346 06:41:03 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:49.346 06:41:03 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:49.346 06:41:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.346 06:41:03 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:49.346 06:41:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:49.346 06:41:03 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:49.346 06:41:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:49.346 06:41:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:10:49.346 06:41:03 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host1 00:10:49.346 06:41:03 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:10:49.346 06:41:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:49.346 06:41:03 -- target/tls.sh@28 -- # bdevperf_pid=65036 00:10:49.346 06:41:03 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:49.346 06:41:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:49.346 06:41:03 -- target/tls.sh@31 -- # waitforlisten 65036 /var/tmp/bdevperf.sock 00:10:49.346 06:41:03 -- common/autotest_common.sh@829 -- # '[' -z 65036 ']' 00:10:49.346 06:41:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:49.346 06:41:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:49.346 06:41:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:49.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:49.346 06:41:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:49.346 06:41:03 -- common/autotest_common.sh@10 -- # set +x 00:10:49.605 [2024-12-14 06:41:03.348160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:49.605 [2024-12-14 06:41:03.348823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65036 ] 00:10:49.605 [2024-12-14 06:41:03.483207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.605 [2024-12-14 06:41:03.533694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.542 06:41:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:50.542 06:41:04 -- common/autotest_common.sh@862 -- # return 0 00:10:50.542 06:41:04 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:10:50.802 [2024-12-14 06:41:04.543060] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:50.802 [2024-12-14 06:41:04.554769] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:50.802 [2024-12-14 06:41:04.555007] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:10:50.802 [2024-12-14 06:41:04.555080] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:50.802 [2024-12-14 06:41:04.555809] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a22650 (107): Transport endpoint is not connected 00:10:50.802 [2024-12-14 06:41:04.556797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a22650 (9): Bad file descriptor 00:10:50.802 [2024-12-14 06:41:04.557793] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:10:50.802 [2024-12-14 06:41:04.557820] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:50.802 [2024-12-14 06:41:04.557846] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:10:50.802 request: 00:10:50.802 { 00:10:50.802 "name": "TLSTEST", 00:10:50.802 "trtype": "tcp", 00:10:50.802 "traddr": "10.0.0.2", 00:10:50.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:50.802 "adrfam": "ipv4", 00:10:50.802 "trsvcid": "4420", 00:10:50.802 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:10:50.802 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:10:50.802 "method": "bdev_nvme_attach_controller", 00:10:50.802 "req_id": 1 00:10:50.802 } 00:10:50.802 Got JSON-RPC error response 00:10:50.802 response: 00:10:50.802 { 00:10:50.802 "code": -32602, 00:10:50.802 "message": "Invalid parameters" 00:10:50.802 } 00:10:50.802 06:41:04 -- target/tls.sh@36 -- # killprocess 65036 00:10:50.802 06:41:04 -- common/autotest_common.sh@936 -- # '[' -z 65036 ']' 00:10:50.802 06:41:04 -- common/autotest_common.sh@940 -- # kill -0 65036 00:10:50.802 06:41:04 -- common/autotest_common.sh@941 -- # uname 00:10:50.802 06:41:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:50.802 06:41:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65036 00:10:50.802 killing process with pid 65036 00:10:50.802 Received shutdown signal, test time was about 10.000000 seconds 00:10:50.802 00:10:50.802 Latency(us) 00:10:50.802 [2024-12-14T06:41:04.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.802 [2024-12-14T06:41:04.794Z] =================================================================================================================== 00:10:50.802 [2024-12-14T06:41:04.794Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:50.802 06:41:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:50.802 06:41:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:50.802 06:41:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65036' 00:10:50.802 06:41:04 -- common/autotest_common.sh@955 -- # kill 65036 00:10:50.802 06:41:04 -- common/autotest_common.sh@960 -- # wait 65036 00:10:50.802 06:41:04 -- target/tls.sh@37 -- # return 1 00:10:50.802 06:41:04 -- common/autotest_common.sh@653 -- # es=1 00:10:50.802 06:41:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:50.802 06:41:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:50.802 06:41:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:50.802 06:41:04 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:50.802 06:41:04 -- common/autotest_common.sh@650 -- # local es=0 00:10:50.802 06:41:04 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:50.802 06:41:04 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:10:50.802 06:41:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:50.802 06:41:04 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:10:50.802 06:41:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:50.802 06:41:04 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:10:50.802 06:41:04 -- target/tls.sh@22 -- # local subnqn 
hostnqn psk 00:10:50.802 06:41:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:50.802 06:41:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:50.802 06:41:04 -- target/tls.sh@23 -- # psk= 00:10:50.802 06:41:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:50.802 06:41:04 -- target/tls.sh@28 -- # bdevperf_pid=65063 00:10:50.802 06:41:04 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:50.802 06:41:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:50.802 06:41:04 -- target/tls.sh@31 -- # waitforlisten 65063 /var/tmp/bdevperf.sock 00:10:50.802 06:41:04 -- common/autotest_common.sh@829 -- # '[' -z 65063 ']' 00:10:50.802 06:41:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:50.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:50.802 06:41:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.802 06:41:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:50.802 06:41:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.802 06:41:04 -- common/autotest_common.sh@10 -- # set +x 00:10:51.061 [2024-12-14 06:41:04.838625] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:51.061 [2024-12-14 06:41:04.838731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65063 ] 00:10:51.061 [2024-12-14 06:41:04.978000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.061 [2024-12-14 06:41:05.029244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.997 06:41:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:51.997 06:41:05 -- common/autotest_common.sh@862 -- # return 0 00:10:51.997 06:41:05 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:10:51.997 [2024-12-14 06:41:05.984936] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:10:51.997 [2024-12-14 06:41:05.986466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1e010 (9): Bad file descriptor 00:10:52.256 [2024-12-14 06:41:05.987463] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:10:52.256 [2024-12-14 06:41:05.987930] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:10:52.256 [2024-12-14 06:41:05.988170] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
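This third negative case omits --psk entirely, so bdevperf attempts a plain TCP attach against a listener that was created with -k (TLS) earlier in the suite; the socket is reset (errno 107) and the controller never reaches a ready state. The failing call, again assuming the bdevperf RPC socket at /var/tmp/bdevperf.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # no --psk argument: the attach is rejected by the TLS-only listener
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The JSON-RPC error response and the killprocess/cleanup lines that follow mirror the two previous negative cases.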
00:10:52.256 request: 00:10:52.256 { 00:10:52.256 "name": "TLSTEST", 00:10:52.256 "trtype": "tcp", 00:10:52.256 "traddr": "10.0.0.2", 00:10:52.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:52.256 "adrfam": "ipv4", 00:10:52.256 "trsvcid": "4420", 00:10:52.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:52.256 "method": "bdev_nvme_attach_controller", 00:10:52.256 "req_id": 1 00:10:52.256 } 00:10:52.256 Got JSON-RPC error response 00:10:52.256 response: 00:10:52.256 { 00:10:52.256 "code": -32602, 00:10:52.257 "message": "Invalid parameters" 00:10:52.257 } 00:10:52.257 06:41:06 -- target/tls.sh@36 -- # killprocess 65063 00:10:52.257 06:41:06 -- common/autotest_common.sh@936 -- # '[' -z 65063 ']' 00:10:52.257 06:41:06 -- common/autotest_common.sh@940 -- # kill -0 65063 00:10:52.257 06:41:06 -- common/autotest_common.sh@941 -- # uname 00:10:52.257 06:41:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:52.257 06:41:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65063 00:10:52.257 killing process with pid 65063 00:10:52.257 Received shutdown signal, test time was about 10.000000 seconds 00:10:52.257 00:10:52.257 Latency(us) 00:10:52.257 [2024-12-14T06:41:06.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.257 [2024-12-14T06:41:06.249Z] =================================================================================================================== 00:10:52.257 [2024-12-14T06:41:06.249Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:52.257 06:41:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:10:52.257 06:41:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:10:52.257 06:41:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65063' 00:10:52.257 06:41:06 -- common/autotest_common.sh@955 -- # kill 65063 00:10:52.257 06:41:06 -- common/autotest_common.sh@960 -- # wait 65063 00:10:52.257 06:41:06 -- target/tls.sh@37 -- # return 1 00:10:52.257 06:41:06 -- common/autotest_common.sh@653 -- # es=1 00:10:52.257 06:41:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:52.257 06:41:06 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:52.257 06:41:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:52.257 06:41:06 -- target/tls.sh@167 -- # killprocess 64609 00:10:52.257 06:41:06 -- common/autotest_common.sh@936 -- # '[' -z 64609 ']' 00:10:52.257 06:41:06 -- common/autotest_common.sh@940 -- # kill -0 64609 00:10:52.257 06:41:06 -- common/autotest_common.sh@941 -- # uname 00:10:52.257 06:41:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:52.257 06:41:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64609 00:10:52.257 killing process with pid 64609 00:10:52.257 06:41:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:52.257 06:41:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:52.257 06:41:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64609' 00:10:52.257 06:41:06 -- common/autotest_common.sh@955 -- # kill 64609 00:10:52.257 06:41:06 -- common/autotest_common.sh@960 -- # wait 64609 00:10:52.516 06:41:06 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:10:52.516 06:41:06 -- target/tls.sh@49 -- # local key hash crc 00:10:52.516 06:41:06 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:10:52.516 06:41:06 -- target/tls.sh@51 -- # hash=02 
00:10:52.516 06:41:06 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:10:52.516 06:41:06 -- target/tls.sh@52 -- # gzip -1 -c 00:10:52.516 06:41:06 -- target/tls.sh@52 -- # tail -c8 00:10:52.517 06:41:06 -- target/tls.sh@52 -- # head -c 4 00:10:52.517 06:41:06 -- target/tls.sh@52 -- # crc='�e�'\''' 00:10:52.517 06:41:06 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:10:52.517 06:41:06 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:10:52.517 06:41:06 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:52.517 06:41:06 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:52.517 06:41:06 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:52.517 06:41:06 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:10:52.517 06:41:06 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:52.517 06:41:06 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:10:52.517 06:41:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:52.517 06:41:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:52.517 06:41:06 -- common/autotest_common.sh@10 -- # set +x 00:10:52.517 06:41:06 -- nvmf/common.sh@469 -- # nvmfpid=65106 00:10:52.517 06:41:06 -- nvmf/common.sh@470 -- # waitforlisten 65106 00:10:52.517 06:41:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:52.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.517 06:41:06 -- common/autotest_common.sh@829 -- # '[' -z 65106 ']' 00:10:52.517 06:41:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.517 06:41:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:52.517 06:41:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.517 06:41:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:52.517 06:41:06 -- common/autotest_common.sh@10 -- # set +x 00:10:52.776 [2024-12-14 06:41:06.508386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:52.776 [2024-12-14 06:41:06.508784] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.776 [2024-12-14 06:41:06.645938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.776 [2024-12-14 06:41:06.698885] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:52.776 [2024-12-14 06:41:06.699309] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.776 [2024-12-14 06:41:06.699331] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.776 [2024-12-14 06:41:06.699342] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
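The tls.sh@52..54 lines above are the long-form PSK being assembled in the NVMe TLS interchange format: the hex key string is run through gzip -1 so its CRC32 can be clipped out of the gzip trailer, the key plus CRC is base64-encoded, and the result is prefixed with NVMeTLSkey-1:02: (02 being the hash identifier). A rough standalone sketch of the same derivation (the CRC bytes are raw binary, which bash tolerates here because they contain no NUL byte):

    key=00112233445566778899aabbccddeeff0011223344556677
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)      # CRC32 bytes from the gzip trailer
    psk_long="NVMeTLSkey-1:02:$(base64 <(echo -n "$key$crc"))"     # the key_long value used below
    echo -n "$psk_long" > /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

With the key written and locked down to 0600, a fresh nvmf target (pid 65106) is started for the positive test.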
00:10:52.776 [2024-12-14 06:41:06.699391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.713 06:41:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:53.713 06:41:07 -- common/autotest_common.sh@862 -- # return 0 00:10:53.713 06:41:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:53.713 06:41:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:53.713 06:41:07 -- common/autotest_common.sh@10 -- # set +x 00:10:53.713 06:41:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.713 06:41:07 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:53.713 06:41:07 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:53.713 06:41:07 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:10:53.972 [2024-12-14 06:41:07.742770] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.972 06:41:07 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:10:54.232 06:41:08 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:10:54.232 [2024-12-14 06:41:08.214852] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:10:54.232 [2024-12-14 06:41:08.215100] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.490 06:41:08 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:10:54.490 malloc0 00:10:54.490 06:41:08 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:54.748 06:41:08 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:55.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
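setup_nvmf_tgt above wires the target end of the TLS test: a TCP transport, subsystem cnode1, a listener on 10.0.0.2:4420 created with -k (secure channel, TLS-only), a 32 MiB malloc bdev as namespace 1, and host1 registered with the long PSK. The equivalent RPC sequence, condensed from the trace lines above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

bdevperf (pid 65159) is then launched for the positive run that follows.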
00:10:55.006 06:41:08 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:55.006 06:41:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:10:55.006 06:41:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:10:55.006 06:41:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:10:55.006 06:41:08 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:10:55.006 06:41:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:55.006 06:41:08 -- target/tls.sh@28 -- # bdevperf_pid=65159 00:10:55.006 06:41:08 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:10:55.006 06:41:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:55.006 06:41:08 -- target/tls.sh@31 -- # waitforlisten 65159 /var/tmp/bdevperf.sock 00:10:55.006 06:41:08 -- common/autotest_common.sh@829 -- # '[' -z 65159 ']' 00:10:55.006 06:41:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:55.006 06:41:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.006 06:41:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:55.006 06:41:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.006 06:41:08 -- common/autotest_common.sh@10 -- # set +x 00:10:55.006 [2024-12-14 06:41:08.913025] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:55.006 [2024-12-14 06:41:08.913320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65159 ] 00:10:55.265 [2024-12-14 06:41:09.046774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.265 [2024-12-14 06:41:09.115532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.833 06:41:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.833 06:41:09 -- common/autotest_common.sh@862 -- # return 0 00:10:55.833 06:41:09 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:10:56.092 [2024-12-14 06:41:10.027506] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:10:56.351 TLSTESTn1 00:10:56.351 06:41:10 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:10:56.351 Running I/O for 10 seconds... 
00:11:06.330 00:11:06.330 Latency(us) 00:11:06.330 [2024-12-14T06:41:20.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:06.330 [2024-12-14T06:41:20.322Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:06.330 Verification LBA range: start 0x0 length 0x2000 00:11:06.330 TLSTESTn1 : 10.02 6098.91 23.82 0.00 0.00 20948.71 4944.99 21090.68 00:11:06.330 [2024-12-14T06:41:20.323Z] =================================================================================================================== 00:11:06.331 [2024-12-14T06:41:20.323Z] Total : 6098.91 23.82 0.00 0.00 20948.71 4944.99 21090.68 00:11:06.331 0 00:11:06.331 06:41:20 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:06.331 06:41:20 -- target/tls.sh@45 -- # killprocess 65159 00:11:06.331 06:41:20 -- common/autotest_common.sh@936 -- # '[' -z 65159 ']' 00:11:06.331 06:41:20 -- common/autotest_common.sh@940 -- # kill -0 65159 00:11:06.331 06:41:20 -- common/autotest_common.sh@941 -- # uname 00:11:06.331 06:41:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:06.331 06:41:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65159 00:11:06.331 killing process with pid 65159 00:11:06.331 Received shutdown signal, test time was about 10.000000 seconds 00:11:06.331 00:11:06.331 Latency(us) 00:11:06.331 [2024-12-14T06:41:20.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:06.331 [2024-12-14T06:41:20.323Z] =================================================================================================================== 00:11:06.331 [2024-12-14T06:41:20.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:06.331 06:41:20 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:06.331 06:41:20 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:06.331 06:41:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65159' 00:11:06.331 06:41:20 -- common/autotest_common.sh@955 -- # kill 65159 00:11:06.331 06:41:20 -- common/autotest_common.sh@960 -- # wait 65159 00:11:06.590 06:41:20 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:06.590 06:41:20 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:06.590 06:41:20 -- common/autotest_common.sh@650 -- # local es=0 00:11:06.590 06:41:20 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:06.590 06:41:20 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:11:06.590 06:41:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:06.590 06:41:20 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:11:06.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
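The successful 10-second verify run against TLSTESTn1 finishes just above (about 6099 IOPS average at 4 KiB, queue depth 128). The harness then flips the key file to mode 0666 (tls.sh@179) and relaunches bdevperf, expecting the next attach to be refused on the initiator side because the PSK file is readable by group/other. Roughly:

    chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    # the attach below is now expected to fail with "Could not retrieve PSK from file" (rc -22)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt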
00:11:06.590 06:41:20 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:06.590 06:41:20 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:06.590 06:41:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:06.590 06:41:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:06.590 06:41:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:06.590 06:41:20 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:06.590 06:41:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:06.590 06:41:20 -- target/tls.sh@28 -- # bdevperf_pid=65295 00:11:06.590 06:41:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:06.590 06:41:20 -- target/tls.sh@31 -- # waitforlisten 65295 /var/tmp/bdevperf.sock 00:11:06.590 06:41:20 -- common/autotest_common.sh@829 -- # '[' -z 65295 ']' 00:11:06.590 06:41:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:06.590 06:41:20 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:06.590 06:41:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.590 06:41:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:06.590 06:41:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.590 06:41:20 -- common/autotest_common.sh@10 -- # set +x 00:11:06.590 [2024-12-14 06:41:20.517485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:06.590 [2024-12-14 06:41:20.517803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65295 ] 00:11:06.849 [2024-12-14 06:41:20.654238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.849 [2024-12-14 06:41:20.708469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.849 06:41:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.849 06:41:20 -- common/autotest_common.sh@862 -- # return 0 00:11:06.849 06:41:20 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:07.107 [2024-12-14 06:41:21.040731] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:07.107 [2024-12-14 06:41:21.041306] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:07.107 request: 00:11:07.107 { 00:11:07.107 "name": "TLSTEST", 00:11:07.107 "trtype": "tcp", 00:11:07.107 "traddr": "10.0.0.2", 00:11:07.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:07.107 "adrfam": "ipv4", 00:11:07.107 "trsvcid": "4420", 00:11:07.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:07.107 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:07.107 "method": "bdev_nvme_attach_controller", 00:11:07.107 "req_id": 1 00:11:07.107 } 00:11:07.107 Got JSON-RPC error response 00:11:07.107 response: 00:11:07.107 { 00:11:07.107 "code": -22, 00:11:07.107 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:07.107 } 00:11:07.107 06:41:21 -- target/tls.sh@36 -- # killprocess 65295 00:11:07.107 06:41:21 -- common/autotest_common.sh@936 -- # '[' -z 65295 ']' 00:11:07.107 06:41:21 -- common/autotest_common.sh@940 -- # kill -0 65295 00:11:07.107 06:41:21 -- common/autotest_common.sh@941 -- # uname 00:11:07.107 06:41:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:07.107 06:41:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65295 00:11:07.367 killing process with pid 65295 00:11:07.367 Received shutdown signal, test time was about 10.000000 seconds 00:11:07.367 00:11:07.367 Latency(us) 00:11:07.367 [2024-12-14T06:41:21.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.367 [2024-12-14T06:41:21.359Z] =================================================================================================================== 00:11:07.367 [2024-12-14T06:41:21.359Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:07.367 06:41:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:07.367 06:41:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:07.367 06:41:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65295' 00:11:07.367 06:41:21 -- common/autotest_common.sh@955 -- # kill 65295 00:11:07.367 06:41:21 -- common/autotest_common.sh@960 -- # wait 65295 00:11:07.367 06:41:21 -- target/tls.sh@37 -- # return 1 00:11:07.367 06:41:21 -- common/autotest_common.sh@653 -- # es=1 00:11:07.367 06:41:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:07.367 06:41:21 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:07.367 06:41:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:07.367 06:41:21 -- target/tls.sh@183 -- # killprocess 65106 00:11:07.367 06:41:21 -- common/autotest_common.sh@936 -- # '[' -z 65106 ']' 00:11:07.367 06:41:21 -- common/autotest_common.sh@940 -- # kill -0 65106 00:11:07.367 06:41:21 -- common/autotest_common.sh@941 -- # uname 00:11:07.367 06:41:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:07.367 06:41:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65106 00:11:07.367 killing process with pid 65106 00:11:07.367 06:41:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:07.367 06:41:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:07.367 06:41:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65106' 00:11:07.367 06:41:21 -- common/autotest_common.sh@955 -- # kill 65106 00:11:07.367 06:41:21 -- common/autotest_common.sh@960 -- # wait 65106 00:11:07.626 06:41:21 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:11:07.626 06:41:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:07.626 06:41:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:07.626 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:11:07.626 06:41:21 -- nvmf/common.sh@469 -- # nvmfpid=65320 00:11:07.626 06:41:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:07.626 06:41:21 -- nvmf/common.sh@470 -- # waitforlisten 65320 00:11:07.626 06:41:21 -- common/autotest_common.sh@829 -- # '[' -z 65320 ']' 00:11:07.626 06:41:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.626 06:41:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.626 06:41:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.626 06:41:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.626 06:41:21 -- common/autotest_common.sh@10 -- # set +x 00:11:07.626 [2024-12-14 06:41:21.551562] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:07.626 [2024-12-14 06:41:21.551903] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.884 [2024-12-14 06:41:21.683472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.884 [2024-12-14 06:41:21.739658] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:07.885 [2024-12-14 06:41:21.739818] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.885 [2024-12-14 06:41:21.739832] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.885 [2024-12-14 06:41:21.739841] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
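bdevperf (pid 65295) and the original target (pid 65106) are torn down above, and a fresh nvmf_tgt (pid 65320) is started so the same permission check can be exercised on the server side: setup_nvmf_tgt is rerun with key_long.txt still at 0666, and the nvmf_subsystem_add_host step is the one expected to fail. The failing call, roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
    # with the key file world-readable the target logs "Incorrect permissions for PSK file"
    # and the RPC returns -32603 Internal error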
00:11:07.885 [2024-12-14 06:41:21.739885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.821 06:41:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.821 06:41:22 -- common/autotest_common.sh@862 -- # return 0 00:11:08.821 06:41:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:08.821 06:41:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:08.821 06:41:22 -- common/autotest_common.sh@10 -- # set +x 00:11:08.821 06:41:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.821 06:41:22 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:08.821 06:41:22 -- common/autotest_common.sh@650 -- # local es=0 00:11:08.821 06:41:22 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:08.821 06:41:22 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:11:08.821 06:41:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:08.821 06:41:22 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:11:08.821 06:41:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:08.821 06:41:22 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:08.821 06:41:22 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:08.821 06:41:22 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:09.079 [2024-12-14 06:41:22.822374] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.079 06:41:22 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:09.338 06:41:23 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:09.338 [2024-12-14 06:41:23.282503] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:09.338 [2024-12-14 06:41:23.282710] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.338 06:41:23 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:09.597 malloc0 00:11:09.597 06:41:23 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:09.856 06:41:23 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:10.114 [2024-12-14 06:41:24.040513] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:11:10.114 [2024-12-14 06:41:24.040557] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:11:10.114 [2024-12-14 06:41:24.040590] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:11:10.114 request: 00:11:10.114 { 00:11:10.114 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.115 "host": "nqn.2016-06.io.spdk:host1", 00:11:10.115 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:10.115 "method": "nvmf_subsystem_add_host", 00:11:10.115 
"req_id": 1 00:11:10.115 } 00:11:10.115 Got JSON-RPC error response 00:11:10.115 response: 00:11:10.115 { 00:11:10.115 "code": -32603, 00:11:10.115 "message": "Internal error" 00:11:10.115 } 00:11:10.115 06:41:24 -- common/autotest_common.sh@653 -- # es=1 00:11:10.115 06:41:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:10.115 06:41:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:10.115 06:41:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:10.115 06:41:24 -- target/tls.sh@189 -- # killprocess 65320 00:11:10.115 06:41:24 -- common/autotest_common.sh@936 -- # '[' -z 65320 ']' 00:11:10.115 06:41:24 -- common/autotest_common.sh@940 -- # kill -0 65320 00:11:10.115 06:41:24 -- common/autotest_common.sh@941 -- # uname 00:11:10.115 06:41:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:10.115 06:41:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65320 00:11:10.115 killing process with pid 65320 00:11:10.115 06:41:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:10.115 06:41:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:10.115 06:41:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65320' 00:11:10.115 06:41:24 -- common/autotest_common.sh@955 -- # kill 65320 00:11:10.115 06:41:24 -- common/autotest_common.sh@960 -- # wait 65320 00:11:10.374 06:41:24 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:10.374 06:41:24 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:11:10.374 06:41:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:10.374 06:41:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:10.374 06:41:24 -- common/autotest_common.sh@10 -- # set +x 00:11:10.374 06:41:24 -- nvmf/common.sh@469 -- # nvmfpid=65382 00:11:10.374 06:41:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:10.374 06:41:24 -- nvmf/common.sh@470 -- # waitforlisten 65382 00:11:10.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.374 06:41:24 -- common/autotest_common.sh@829 -- # '[' -z 65382 ']' 00:11:10.374 06:41:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.374 06:41:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.374 06:41:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.374 06:41:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.374 06:41:24 -- common/autotest_common.sh@10 -- # set +x 00:11:10.374 [2024-12-14 06:41:24.330791] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:10.374 [2024-12-14 06:41:24.331111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.633 [2024-12-14 06:41:24.462334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.633 [2024-12-14 06:41:24.512150] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:10.633 [2024-12-14 06:41:24.512533] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:10.633 [2024-12-14 06:41:24.512581] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.633 [2024-12-14 06:41:24.512665] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:10.633 [2024-12-14 06:41:24.512701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.569 06:41:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.569 06:41:25 -- common/autotest_common.sh@862 -- # return 0 00:11:11.569 06:41:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:11.569 06:41:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:11.569 06:41:25 -- common/autotest_common.sh@10 -- # set +x 00:11:11.569 06:41:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.569 06:41:25 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:11.569 06:41:25 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:11.569 06:41:25 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:11.569 [2024-12-14 06:41:25.556240] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.828 06:41:25 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:11.828 06:41:25 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:12.086 [2024-12-14 06:41:25.976278] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:12.086 [2024-12-14 06:41:25.976479] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.086 06:41:25 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:12.345 malloc0 00:11:12.345 06:41:26 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:12.604 06:41:26 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:12.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:12.862 06:41:26 -- target/tls.sh@197 -- # bdevperf_pid=65437 00:11:12.863 06:41:26 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:12.863 06:41:26 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:12.863 06:41:26 -- target/tls.sh@200 -- # waitforlisten 65437 /var/tmp/bdevperf.sock 00:11:12.863 06:41:26 -- common/autotest_common.sh@829 -- # '[' -z 65437 ']' 00:11:12.863 06:41:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:12.863 06:41:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.863 06:41:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
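bdevperf (pid 65437) attaches with the long key and TLSTESTn1 comes up; the target and the bdevperf process are then each asked to dump their live configuration so the TLS wiring can be inspected, and those dumps are the two large JSON blobs (tgtconf and bdevperfconf) that follow. To reproduce the dumps:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config                              # target-side JSON (the tgtconf blob below)
    $rpc -s /var/tmp/bdevperf.sock save_config    # initiator-side JSON (the bdevperfconf blob below)

In the target dump, note the nvmf_subsystem_add_host entry carrying the psk path and the listener's "secure_channel": true; in the bdevperf dump, the bdev_nvme_attach_controller params carry the same psk path.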
00:11:12.863 06:41:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.863 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:11:12.863 [2024-12-14 06:41:26.787297] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:12.863 [2024-12-14 06:41:26.787544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65437 ] 00:11:13.121 [2024-12-14 06:41:26.922492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.121 [2024-12-14 06:41:26.990771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.057 06:41:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:14.057 06:41:27 -- common/autotest_common.sh@862 -- # return 0 00:11:14.057 06:41:27 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:14.057 [2024-12-14 06:41:27.880762] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:14.057 TLSTESTn1 00:11:14.057 06:41:27 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:14.626 06:41:28 -- target/tls.sh@205 -- # tgtconf='{ 00:11:14.626 "subsystems": [ 00:11:14.626 { 00:11:14.626 "subsystem": "iobuf", 00:11:14.626 "config": [ 00:11:14.626 { 00:11:14.626 "method": "iobuf_set_options", 00:11:14.626 "params": { 00:11:14.626 "small_pool_count": 8192, 00:11:14.626 "large_pool_count": 1024, 00:11:14.626 "small_bufsize": 8192, 00:11:14.626 "large_bufsize": 135168 00:11:14.626 } 00:11:14.626 } 00:11:14.626 ] 00:11:14.626 }, 00:11:14.626 { 00:11:14.626 "subsystem": "sock", 00:11:14.626 "config": [ 00:11:14.626 { 00:11:14.626 "method": "sock_impl_set_options", 00:11:14.626 "params": { 00:11:14.626 "impl_name": "uring", 00:11:14.626 "recv_buf_size": 2097152, 00:11:14.626 "send_buf_size": 2097152, 00:11:14.626 "enable_recv_pipe": true, 00:11:14.626 "enable_quickack": false, 00:11:14.626 "enable_placement_id": 0, 00:11:14.626 "enable_zerocopy_send_server": false, 00:11:14.626 "enable_zerocopy_send_client": false, 00:11:14.626 "zerocopy_threshold": 0, 00:11:14.626 "tls_version": 0, 00:11:14.626 "enable_ktls": false 00:11:14.626 } 00:11:14.626 }, 00:11:14.626 { 00:11:14.626 "method": "sock_impl_set_options", 00:11:14.626 "params": { 00:11:14.626 "impl_name": "posix", 00:11:14.626 "recv_buf_size": 2097152, 00:11:14.626 "send_buf_size": 2097152, 00:11:14.626 "enable_recv_pipe": true, 00:11:14.626 "enable_quickack": false, 00:11:14.626 "enable_placement_id": 0, 00:11:14.626 "enable_zerocopy_send_server": true, 00:11:14.626 "enable_zerocopy_send_client": false, 00:11:14.626 "zerocopy_threshold": 0, 00:11:14.626 "tls_version": 0, 00:11:14.626 "enable_ktls": false 00:11:14.626 } 00:11:14.626 }, 00:11:14.626 { 00:11:14.626 "method": "sock_impl_set_options", 00:11:14.626 "params": { 00:11:14.626 "impl_name": "ssl", 00:11:14.626 "recv_buf_size": 4096, 00:11:14.626 "send_buf_size": 4096, 00:11:14.626 "enable_recv_pipe": true, 00:11:14.626 "enable_quickack": false, 00:11:14.626 "enable_placement_id": 0, 00:11:14.626 "enable_zerocopy_send_server": true, 00:11:14.626 "enable_zerocopy_send_client": false, 00:11:14.626 
"zerocopy_threshold": 0, 00:11:14.626 "tls_version": 0, 00:11:14.626 "enable_ktls": false 00:11:14.626 } 00:11:14.626 } 00:11:14.626 ] 00:11:14.626 }, 00:11:14.626 { 00:11:14.626 "subsystem": "vmd", 00:11:14.626 "config": [] 00:11:14.626 }, 00:11:14.626 { 00:11:14.626 "subsystem": "accel", 00:11:14.626 "config": [ 00:11:14.626 { 00:11:14.626 "method": "accel_set_options", 00:11:14.626 "params": { 00:11:14.626 "small_cache_size": 128, 00:11:14.626 "large_cache_size": 16, 00:11:14.626 "task_count": 2048, 00:11:14.626 "sequence_count": 2048, 00:11:14.626 "buf_count": 2048 00:11:14.626 } 00:11:14.626 } 00:11:14.626 ] 00:11:14.626 }, 00:11:14.626 { 00:11:14.626 "subsystem": "bdev", 00:11:14.626 "config": [ 00:11:14.626 { 00:11:14.626 "method": "bdev_set_options", 00:11:14.626 "params": { 00:11:14.626 "bdev_io_pool_size": 65535, 00:11:14.626 "bdev_io_cache_size": 256, 00:11:14.626 "bdev_auto_examine": true, 00:11:14.626 "iobuf_small_cache_size": 128, 00:11:14.626 "iobuf_large_cache_size": 16 00:11:14.626 } 00:11:14.626 }, 00:11:14.626 { 00:11:14.626 "method": "bdev_raid_set_options", 00:11:14.626 "params": { 00:11:14.626 "process_window_size_kb": 1024 00:11:14.626 } 00:11:14.626 }, 00:11:14.626 { 00:11:14.626 "method": "bdev_iscsi_set_options", 00:11:14.626 "params": { 00:11:14.626 "timeout_sec": 30 00:11:14.626 } 00:11:14.626 }, 00:11:14.626 { 00:11:14.626 "method": "bdev_nvme_set_options", 00:11:14.626 "params": { 00:11:14.626 "action_on_timeout": "none", 00:11:14.626 "timeout_us": 0, 00:11:14.626 "timeout_admin_us": 0, 00:11:14.626 "keep_alive_timeout_ms": 10000, 00:11:14.626 "transport_retry_count": 4, 00:11:14.626 "arbitration_burst": 0, 00:11:14.626 "low_priority_weight": 0, 00:11:14.626 "medium_priority_weight": 0, 00:11:14.626 "high_priority_weight": 0, 00:11:14.626 "nvme_adminq_poll_period_us": 10000, 00:11:14.626 "nvme_ioq_poll_period_us": 0, 00:11:14.626 "io_queue_requests": 0, 00:11:14.626 "delay_cmd_submit": true, 00:11:14.626 "bdev_retry_count": 3, 00:11:14.626 "transport_ack_timeout": 0, 00:11:14.626 "ctrlr_loss_timeout_sec": 0, 00:11:14.626 "reconnect_delay_sec": 0, 00:11:14.626 "fast_io_fail_timeout_sec": 0, 00:11:14.626 "generate_uuids": false, 00:11:14.626 "transport_tos": 0, 00:11:14.626 "io_path_stat": false, 00:11:14.626 "allow_accel_sequence": false 00:11:14.626 } 00:11:14.626 }, 00:11:14.626 { 00:11:14.626 "method": "bdev_nvme_set_hotplug", 00:11:14.626 "params": { 00:11:14.626 "period_us": 100000, 00:11:14.626 "enable": false 00:11:14.626 } 00:11:14.626 }, 00:11:14.626 { 00:11:14.626 "method": "bdev_malloc_create", 00:11:14.626 "params": { 00:11:14.626 "name": "malloc0", 00:11:14.626 "num_blocks": 8192, 00:11:14.626 "block_size": 4096, 00:11:14.626 "physical_block_size": 4096, 00:11:14.626 "uuid": "648df490-07e5-4b3d-bd0b-2124acd1d7f2", 00:11:14.626 "optimal_io_boundary": 0 00:11:14.626 } 00:11:14.626 }, 00:11:14.626 { 00:11:14.626 "method": "bdev_wait_for_examine" 00:11:14.626 } 00:11:14.626 ] 00:11:14.626 }, 00:11:14.626 { 00:11:14.626 "subsystem": "nbd", 00:11:14.627 "config": [] 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "subsystem": "scheduler", 00:11:14.627 "config": [ 00:11:14.627 { 00:11:14.627 "method": "framework_set_scheduler", 00:11:14.627 "params": { 00:11:14.627 "name": "static" 00:11:14.627 } 00:11:14.627 } 00:11:14.627 ] 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "subsystem": "nvmf", 00:11:14.627 "config": [ 00:11:14.627 { 00:11:14.627 "method": "nvmf_set_config", 00:11:14.627 "params": { 00:11:14.627 "discovery_filter": "match_any", 00:11:14.627 
"admin_cmd_passthru": { 00:11:14.627 "identify_ctrlr": false 00:11:14.627 } 00:11:14.627 } 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "method": "nvmf_set_max_subsystems", 00:11:14.627 "params": { 00:11:14.627 "max_subsystems": 1024 00:11:14.627 } 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "method": "nvmf_set_crdt", 00:11:14.627 "params": { 00:11:14.627 "crdt1": 0, 00:11:14.627 "crdt2": 0, 00:11:14.627 "crdt3": 0 00:11:14.627 } 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "method": "nvmf_create_transport", 00:11:14.627 "params": { 00:11:14.627 "trtype": "TCP", 00:11:14.627 "max_queue_depth": 128, 00:11:14.627 "max_io_qpairs_per_ctrlr": 127, 00:11:14.627 "in_capsule_data_size": 4096, 00:11:14.627 "max_io_size": 131072, 00:11:14.627 "io_unit_size": 131072, 00:11:14.627 "max_aq_depth": 128, 00:11:14.627 "num_shared_buffers": 511, 00:11:14.627 "buf_cache_size": 4294967295, 00:11:14.627 "dif_insert_or_strip": false, 00:11:14.627 "zcopy": false, 00:11:14.627 "c2h_success": false, 00:11:14.627 "sock_priority": 0, 00:11:14.627 "abort_timeout_sec": 1 00:11:14.627 } 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "method": "nvmf_create_subsystem", 00:11:14.627 "params": { 00:11:14.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.627 "allow_any_host": false, 00:11:14.627 "serial_number": "SPDK00000000000001", 00:11:14.627 "model_number": "SPDK bdev Controller", 00:11:14.627 "max_namespaces": 10, 00:11:14.627 "min_cntlid": 1, 00:11:14.627 "max_cntlid": 65519, 00:11:14.627 "ana_reporting": false 00:11:14.627 } 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "method": "nvmf_subsystem_add_host", 00:11:14.627 "params": { 00:11:14.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.627 "host": "nqn.2016-06.io.spdk:host1", 00:11:14.627 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:14.627 } 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "method": "nvmf_subsystem_add_ns", 00:11:14.627 "params": { 00:11:14.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.627 "namespace": { 00:11:14.627 "nsid": 1, 00:11:14.627 "bdev_name": "malloc0", 00:11:14.627 "nguid": "648DF49007E54B3DBD0B2124ACD1D7F2", 00:11:14.627 "uuid": "648df490-07e5-4b3d-bd0b-2124acd1d7f2" 00:11:14.627 } 00:11:14.627 } 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "method": "nvmf_subsystem_add_listener", 00:11:14.627 "params": { 00:11:14.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.627 "listen_address": { 00:11:14.627 "trtype": "TCP", 00:11:14.627 "adrfam": "IPv4", 00:11:14.627 "traddr": "10.0.0.2", 00:11:14.627 "trsvcid": "4420" 00:11:14.627 }, 00:11:14.627 "secure_channel": true 00:11:14.627 } 00:11:14.627 } 00:11:14.627 ] 00:11:14.627 } 00:11:14.627 ] 00:11:14.627 }' 00:11:14.627 06:41:28 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:11:14.627 06:41:28 -- target/tls.sh@206 -- # bdevperfconf='{ 00:11:14.627 "subsystems": [ 00:11:14.627 { 00:11:14.627 "subsystem": "iobuf", 00:11:14.627 "config": [ 00:11:14.627 { 00:11:14.627 "method": "iobuf_set_options", 00:11:14.627 "params": { 00:11:14.627 "small_pool_count": 8192, 00:11:14.627 "large_pool_count": 1024, 00:11:14.627 "small_bufsize": 8192, 00:11:14.627 "large_bufsize": 135168 00:11:14.627 } 00:11:14.627 } 00:11:14.627 ] 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "subsystem": "sock", 00:11:14.627 "config": [ 00:11:14.627 { 00:11:14.627 "method": "sock_impl_set_options", 00:11:14.627 "params": { 00:11:14.627 "impl_name": "uring", 00:11:14.627 "recv_buf_size": 2097152, 00:11:14.627 "send_buf_size": 2097152, 
00:11:14.627 "enable_recv_pipe": true, 00:11:14.627 "enable_quickack": false, 00:11:14.627 "enable_placement_id": 0, 00:11:14.627 "enable_zerocopy_send_server": false, 00:11:14.627 "enable_zerocopy_send_client": false, 00:11:14.627 "zerocopy_threshold": 0, 00:11:14.627 "tls_version": 0, 00:11:14.627 "enable_ktls": false 00:11:14.627 } 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "method": "sock_impl_set_options", 00:11:14.627 "params": { 00:11:14.627 "impl_name": "posix", 00:11:14.627 "recv_buf_size": 2097152, 00:11:14.627 "send_buf_size": 2097152, 00:11:14.627 "enable_recv_pipe": true, 00:11:14.627 "enable_quickack": false, 00:11:14.627 "enable_placement_id": 0, 00:11:14.627 "enable_zerocopy_send_server": true, 00:11:14.627 "enable_zerocopy_send_client": false, 00:11:14.627 "zerocopy_threshold": 0, 00:11:14.627 "tls_version": 0, 00:11:14.627 "enable_ktls": false 00:11:14.627 } 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "method": "sock_impl_set_options", 00:11:14.627 "params": { 00:11:14.627 "impl_name": "ssl", 00:11:14.627 "recv_buf_size": 4096, 00:11:14.627 "send_buf_size": 4096, 00:11:14.627 "enable_recv_pipe": true, 00:11:14.627 "enable_quickack": false, 00:11:14.627 "enable_placement_id": 0, 00:11:14.627 "enable_zerocopy_send_server": true, 00:11:14.627 "enable_zerocopy_send_client": false, 00:11:14.627 "zerocopy_threshold": 0, 00:11:14.627 "tls_version": 0, 00:11:14.627 "enable_ktls": false 00:11:14.627 } 00:11:14.627 } 00:11:14.627 ] 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "subsystem": "vmd", 00:11:14.627 "config": [] 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "subsystem": "accel", 00:11:14.627 "config": [ 00:11:14.627 { 00:11:14.627 "method": "accel_set_options", 00:11:14.627 "params": { 00:11:14.627 "small_cache_size": 128, 00:11:14.627 "large_cache_size": 16, 00:11:14.627 "task_count": 2048, 00:11:14.627 "sequence_count": 2048, 00:11:14.627 "buf_count": 2048 00:11:14.627 } 00:11:14.627 } 00:11:14.627 ] 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "subsystem": "bdev", 00:11:14.627 "config": [ 00:11:14.627 { 00:11:14.627 "method": "bdev_set_options", 00:11:14.627 "params": { 00:11:14.627 "bdev_io_pool_size": 65535, 00:11:14.627 "bdev_io_cache_size": 256, 00:11:14.627 "bdev_auto_examine": true, 00:11:14.627 "iobuf_small_cache_size": 128, 00:11:14.627 "iobuf_large_cache_size": 16 00:11:14.627 } 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "method": "bdev_raid_set_options", 00:11:14.627 "params": { 00:11:14.627 "process_window_size_kb": 1024 00:11:14.627 } 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "method": "bdev_iscsi_set_options", 00:11:14.627 "params": { 00:11:14.627 "timeout_sec": 30 00:11:14.627 } 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "method": "bdev_nvme_set_options", 00:11:14.627 "params": { 00:11:14.627 "action_on_timeout": "none", 00:11:14.627 "timeout_us": 0, 00:11:14.627 "timeout_admin_us": 0, 00:11:14.627 "keep_alive_timeout_ms": 10000, 00:11:14.627 "transport_retry_count": 4, 00:11:14.627 "arbitration_burst": 0, 00:11:14.627 "low_priority_weight": 0, 00:11:14.627 "medium_priority_weight": 0, 00:11:14.627 "high_priority_weight": 0, 00:11:14.627 "nvme_adminq_poll_period_us": 10000, 00:11:14.627 "nvme_ioq_poll_period_us": 0, 00:11:14.627 "io_queue_requests": 512, 00:11:14.627 "delay_cmd_submit": true, 00:11:14.627 "bdev_retry_count": 3, 00:11:14.627 "transport_ack_timeout": 0, 00:11:14.627 "ctrlr_loss_timeout_sec": 0, 00:11:14.627 "reconnect_delay_sec": 0, 00:11:14.627 "fast_io_fail_timeout_sec": 0, 00:11:14.627 "generate_uuids": false, 00:11:14.627 
"transport_tos": 0, 00:11:14.627 "io_path_stat": false, 00:11:14.627 "allow_accel_sequence": false 00:11:14.627 } 00:11:14.627 }, 00:11:14.627 { 00:11:14.627 "method": "bdev_nvme_attach_controller", 00:11:14.627 "params": { 00:11:14.628 "name": "TLSTEST", 00:11:14.628 "trtype": "TCP", 00:11:14.628 "adrfam": "IPv4", 00:11:14.628 "traddr": "10.0.0.2", 00:11:14.628 "trsvcid": "4420", 00:11:14.628 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.628 "prchk_reftag": false, 00:11:14.628 "prchk_guard": false, 00:11:14.628 "ctrlr_loss_timeout_sec": 0, 00:11:14.628 "reconnect_delay_sec": 0, 00:11:14.628 "fast_io_fail_timeout_sec": 0, 00:11:14.628 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:14.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:14.628 "hdgst": false, 00:11:14.628 "ddgst": false 00:11:14.628 } 00:11:14.628 }, 00:11:14.628 { 00:11:14.628 "method": "bdev_nvme_set_hotplug", 00:11:14.628 "params": { 00:11:14.628 "period_us": 100000, 00:11:14.628 "enable": false 00:11:14.628 } 00:11:14.628 }, 00:11:14.628 { 00:11:14.628 "method": "bdev_wait_for_examine" 00:11:14.628 } 00:11:14.628 ] 00:11:14.628 }, 00:11:14.628 { 00:11:14.628 "subsystem": "nbd", 00:11:14.628 "config": [] 00:11:14.628 } 00:11:14.628 ] 00:11:14.628 }' 00:11:14.628 06:41:28 -- target/tls.sh@208 -- # killprocess 65437 00:11:14.628 06:41:28 -- common/autotest_common.sh@936 -- # '[' -z 65437 ']' 00:11:14.628 06:41:28 -- common/autotest_common.sh@940 -- # kill -0 65437 00:11:14.628 06:41:28 -- common/autotest_common.sh@941 -- # uname 00:11:14.887 06:41:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:14.887 06:41:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65437 00:11:14.887 killing process with pid 65437 00:11:14.887 Received shutdown signal, test time was about 10.000000 seconds 00:11:14.887 00:11:14.887 Latency(us) 00:11:14.887 [2024-12-14T06:41:28.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:14.887 [2024-12-14T06:41:28.879Z] =================================================================================================================== 00:11:14.887 [2024-12-14T06:41:28.879Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:14.887 06:41:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:14.887 06:41:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:14.887 06:41:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65437' 00:11:14.887 06:41:28 -- common/autotest_common.sh@955 -- # kill 65437 00:11:14.887 06:41:28 -- common/autotest_common.sh@960 -- # wait 65437 00:11:14.887 06:41:28 -- target/tls.sh@209 -- # killprocess 65382 00:11:14.887 06:41:28 -- common/autotest_common.sh@936 -- # '[' -z 65382 ']' 00:11:14.887 06:41:28 -- common/autotest_common.sh@940 -- # kill -0 65382 00:11:14.887 06:41:28 -- common/autotest_common.sh@941 -- # uname 00:11:14.887 06:41:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:14.887 06:41:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65382 00:11:14.887 killing process with pid 65382 00:11:14.887 06:41:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:14.887 06:41:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:14.887 06:41:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65382' 00:11:14.887 06:41:28 -- common/autotest_common.sh@955 -- # kill 65382 00:11:14.887 06:41:28 -- common/autotest_common.sh@960 -- # 
wait 65382 00:11:15.147 06:41:29 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:11:15.147 06:41:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:15.147 06:41:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:15.147 06:41:29 -- common/autotest_common.sh@10 -- # set +x 00:11:15.147 06:41:29 -- target/tls.sh@212 -- # echo '{ 00:11:15.147 "subsystems": [ 00:11:15.147 { 00:11:15.147 "subsystem": "iobuf", 00:11:15.147 "config": [ 00:11:15.147 { 00:11:15.147 "method": "iobuf_set_options", 00:11:15.147 "params": { 00:11:15.147 "small_pool_count": 8192, 00:11:15.147 "large_pool_count": 1024, 00:11:15.147 "small_bufsize": 8192, 00:11:15.147 "large_bufsize": 135168 00:11:15.147 } 00:11:15.147 } 00:11:15.147 ] 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "subsystem": "sock", 00:11:15.147 "config": [ 00:11:15.147 { 00:11:15.147 "method": "sock_impl_set_options", 00:11:15.147 "params": { 00:11:15.147 "impl_name": "uring", 00:11:15.147 "recv_buf_size": 2097152, 00:11:15.147 "send_buf_size": 2097152, 00:11:15.147 "enable_recv_pipe": true, 00:11:15.147 "enable_quickack": false, 00:11:15.147 "enable_placement_id": 0, 00:11:15.147 "enable_zerocopy_send_server": false, 00:11:15.147 "enable_zerocopy_send_client": false, 00:11:15.147 "zerocopy_threshold": 0, 00:11:15.147 "tls_version": 0, 00:11:15.147 "enable_ktls": false 00:11:15.147 } 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "method": "sock_impl_set_options", 00:11:15.147 "params": { 00:11:15.147 "impl_name": "posix", 00:11:15.147 "recv_buf_size": 2097152, 00:11:15.147 "send_buf_size": 2097152, 00:11:15.147 "enable_recv_pipe": true, 00:11:15.147 "enable_quickack": false, 00:11:15.147 "enable_placement_id": 0, 00:11:15.147 "enable_zerocopy_send_server": true, 00:11:15.147 "enable_zerocopy_send_client": false, 00:11:15.147 "zerocopy_threshold": 0, 00:11:15.147 "tls_version": 0, 00:11:15.147 "enable_ktls": false 00:11:15.147 } 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "method": "sock_impl_set_options", 00:11:15.147 "params": { 00:11:15.147 "impl_name": "ssl", 00:11:15.147 "recv_buf_size": 4096, 00:11:15.147 "send_buf_size": 4096, 00:11:15.147 "enable_recv_pipe": true, 00:11:15.147 "enable_quickack": false, 00:11:15.147 "enable_placement_id": 0, 00:11:15.147 "enable_zerocopy_send_server": true, 00:11:15.147 "enable_zerocopy_send_client": false, 00:11:15.147 "zerocopy_threshold": 0, 00:11:15.147 "tls_version": 0, 00:11:15.147 "enable_ktls": false 00:11:15.147 } 00:11:15.147 } 00:11:15.147 ] 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "subsystem": "vmd", 00:11:15.147 "config": [] 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "subsystem": "accel", 00:11:15.147 "config": [ 00:11:15.147 { 00:11:15.147 "method": "accel_set_options", 00:11:15.147 "params": { 00:11:15.147 "small_cache_size": 128, 00:11:15.147 "large_cache_size": 16, 00:11:15.147 "task_count": 2048, 00:11:15.147 "sequence_count": 2048, 00:11:15.147 "buf_count": 2048 00:11:15.147 } 00:11:15.147 } 00:11:15.147 ] 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "subsystem": "bdev", 00:11:15.147 "config": [ 00:11:15.147 { 00:11:15.147 "method": "bdev_set_options", 00:11:15.147 "params": { 00:11:15.147 "bdev_io_pool_size": 65535, 00:11:15.147 "bdev_io_cache_size": 256, 00:11:15.147 "bdev_auto_examine": true, 00:11:15.147 "iobuf_small_cache_size": 128, 00:11:15.147 "iobuf_large_cache_size": 16 00:11:15.147 } 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "method": "bdev_raid_set_options", 00:11:15.147 "params": { 00:11:15.147 "process_window_size_kb": 1024 00:11:15.147 } 
00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "method": "bdev_iscsi_set_options", 00:11:15.147 "params": { 00:11:15.147 "timeout_sec": 30 00:11:15.147 } 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "method": "bdev_nvme_set_options", 00:11:15.147 "params": { 00:11:15.147 "action_on_timeout": "none", 00:11:15.147 "timeout_us": 0, 00:11:15.147 "timeout_admin_us": 0, 00:11:15.147 "keep_alive_timeout_ms": 10000, 00:11:15.147 "transport_retry_count": 4, 00:11:15.147 "arbitration_burst": 0, 00:11:15.147 "low_priority_weight": 0, 00:11:15.147 "medium_priority_weight": 0, 00:11:15.147 "high_priority_weight": 0, 00:11:15.147 "nvme_adminq_poll_period_us": 10000, 00:11:15.147 "nvme_ioq_poll_period_us": 0, 00:11:15.147 "io_queue_requests": 0, 00:11:15.147 "delay_cmd_submit": true, 00:11:15.147 "bdev_retry_count": 3, 00:11:15.147 "transport_ack_timeout": 0, 00:11:15.147 "ctrlr_loss_timeout_sec": 0, 00:11:15.147 "reconnect_delay_sec": 0, 00:11:15.147 "fast_io_fail_timeout_sec": 0, 00:11:15.147 "generate_uuids": false, 00:11:15.147 "transport_tos": 0, 00:11:15.147 "io_path_stat": false, 00:11:15.147 "allow_accel_sequence": false 00:11:15.147 } 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "method": "bdev_nvme_set_hotplug", 00:11:15.147 "params": { 00:11:15.147 "period_us": 100000, 00:11:15.147 "enable": false 00:11:15.147 } 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "method": "bdev_malloc_create", 00:11:15.147 "params": { 00:11:15.147 "name": "malloc0", 00:11:15.147 "num_blocks": 8192, 00:11:15.147 "block_size": 4096, 00:11:15.147 "physical_block_size": 4096, 00:11:15.147 "uuid": "648df490-07e5-4b3d-bd0b-2124acd1d7f2", 00:11:15.147 "optimal_io_boundary": 0 00:11:15.147 } 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "method": "bdev_wait_for_examine" 00:11:15.147 } 00:11:15.147 ] 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "subsystem": "nbd", 00:11:15.147 "config": [] 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "subsystem": "scheduler", 00:11:15.147 "config": [ 00:11:15.147 { 00:11:15.147 "method": "framework_set_scheduler", 00:11:15.147 "params": { 00:11:15.147 "name": "static" 00:11:15.147 } 00:11:15.147 } 00:11:15.147 ] 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "subsystem": "nvmf", 00:11:15.147 "config": [ 00:11:15.147 { 00:11:15.147 "method": "nvmf_set_config", 00:11:15.147 "params": { 00:11:15.147 "discovery_filter": "match_any", 00:11:15.147 "admin_cmd_passthru": { 00:11:15.147 "identify_ctrlr": false 00:11:15.147 } 00:11:15.147 } 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "method": "nvmf_set_max_subsystems", 00:11:15.147 "params": { 00:11:15.147 "max_subsystems": 1024 00:11:15.147 } 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "method": "nvmf_set_crdt", 00:11:15.147 "params": { 00:11:15.147 "crdt1": 0, 00:11:15.147 "crdt2": 0, 00:11:15.147 "crdt3": 0 00:11:15.147 } 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "method": "nvmf_create_transport", 00:11:15.147 "params": { 00:11:15.147 "trtype": "TCP", 00:11:15.147 "max_queue_depth": 128, 00:11:15.147 "max_io_qpairs_per_ctrlr": 127, 00:11:15.147 "in_capsule_data_size": 4096, 00:11:15.147 "max_io_size": 131072, 00:11:15.147 "io_unit_size": 131072, 00:11:15.147 "max_aq_depth": 128, 00:11:15.147 "num_shared_buffers": 511, 00:11:15.147 "buf_cache_size": 4294967295, 00:11:15.147 "dif_insert_or_strip": false, 00:11:15.147 "zcopy": false, 00:11:15.147 "c2h_success": false, 00:11:15.147 "sock_priority": 0, 00:11:15.147 "abort_timeout_sec": 1 00:11:15.147 } 00:11:15.147 }, 00:11:15.147 { 00:11:15.147 "method": "nvmf_create_subsystem", 00:11:15.147 "params": { 
00:11:15.148 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.148 "allow_any_host": false, 00:11:15.148 "serial_number": "SPDK00000000000001", 00:11:15.148 "model_number": "SPDK bdev Controller", 00:11:15.148 "max_namespaces": 10, 00:11:15.148 "min_cntlid": 1, 00:11:15.148 "max_cntlid": 65519, 00:11:15.148 "ana_reporting": false 00:11:15.148 } 00:11:15.148 }, 00:11:15.148 { 00:11:15.148 "method": "nvmf_subsystem_add_host", 00:11:15.148 "params": { 00:11:15.148 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.148 "host": "nqn.2016-06.io.spdk:host1", 00:11:15.148 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:11:15.148 } 00:11:15.148 }, 00:11:15.148 { 00:11:15.148 "method": "nvmf_subsystem_add_ns", 00:11:15.148 "params": { 00:11:15.148 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.148 "namespace": { 00:11:15.148 "nsid": 1, 00:11:15.148 "bdev_name": "malloc0", 00:11:15.148 "nguid": "648DF49007E54B3DBD0B2124ACD1D7F2", 00:11:15.148 "uuid": "648df490-07e5-4b3d-bd0b-2124acd1d7f2" 00:11:15.148 } 00:11:15.148 } 00:11:15.148 }, 00:11:15.148 { 00:11:15.148 "method": "nvmf_subsystem_add_listener", 00:11:15.148 "params": { 00:11:15.148 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:15.148 "listen_address": { 00:11:15.148 "trtype": "TCP", 00:11:15.148 "adrfam": "IPv4", 00:11:15.148 "traddr": "10.0.0.2", 00:11:15.148 "trsvcid": "4420" 00:11:15.148 }, 00:11:15.148 "secure_channel": true 00:11:15.148 } 00:11:15.148 } 00:11:15.148 ] 00:11:15.148 } 00:11:15.148 ] 00:11:15.148 }' 00:11:15.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.148 06:41:29 -- nvmf/common.sh@469 -- # nvmfpid=65480 00:11:15.148 06:41:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:11:15.148 06:41:29 -- nvmf/common.sh@470 -- # waitforlisten 65480 00:11:15.148 06:41:29 -- common/autotest_common.sh@829 -- # '[' -z 65480 ']' 00:11:15.148 06:41:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.148 06:41:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.148 06:41:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.148 06:41:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.148 06:41:29 -- common/autotest_common.sh@10 -- # set +x 00:11:15.148 [2024-12-14 06:41:29.108835] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:15.148 [2024-12-14 06:41:29.109170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.407 [2024-12-14 06:41:29.246568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.407 [2024-12-14 06:41:29.296281] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:15.407 [2024-12-14 06:41:29.296670] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.407 [2024-12-14 06:41:29.296690] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.407 [2024-12-14 06:41:29.296699] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:15.407 [2024-12-14 06:41:29.296726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.676 [2024-12-14 06:41:29.475570] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.676 [2024-12-14 06:41:29.507529] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:15.676 [2024-12-14 06:41:29.507742] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.273 06:41:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:16.273 06:41:30 -- common/autotest_common.sh@862 -- # return 0 00:11:16.273 06:41:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:16.273 06:41:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:16.274 06:41:30 -- common/autotest_common.sh@10 -- # set +x 00:11:16.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:16.274 06:41:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.274 06:41:30 -- target/tls.sh@216 -- # bdevperf_pid=65512 00:11:16.274 06:41:30 -- target/tls.sh@217 -- # waitforlisten 65512 /var/tmp/bdevperf.sock 00:11:16.274 06:41:30 -- common/autotest_common.sh@829 -- # '[' -z 65512 ']' 00:11:16.274 06:41:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:16.274 06:41:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.274 06:41:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:16.274 06:41:30 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:11:16.274 06:41:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.274 06:41:30 -- common/autotest_common.sh@10 -- # set +x 00:11:16.274 06:41:30 -- target/tls.sh@213 -- # echo '{ 00:11:16.274 "subsystems": [ 00:11:16.274 { 00:11:16.274 "subsystem": "iobuf", 00:11:16.274 "config": [ 00:11:16.274 { 00:11:16.274 "method": "iobuf_set_options", 00:11:16.274 "params": { 00:11:16.274 "small_pool_count": 8192, 00:11:16.274 "large_pool_count": 1024, 00:11:16.274 "small_bufsize": 8192, 00:11:16.274 "large_bufsize": 135168 00:11:16.274 } 00:11:16.274 } 00:11:16.274 ] 00:11:16.274 }, 00:11:16.274 { 00:11:16.274 "subsystem": "sock", 00:11:16.274 "config": [ 00:11:16.274 { 00:11:16.274 "method": "sock_impl_set_options", 00:11:16.274 "params": { 00:11:16.274 "impl_name": "uring", 00:11:16.274 "recv_buf_size": 2097152, 00:11:16.274 "send_buf_size": 2097152, 00:11:16.274 "enable_recv_pipe": true, 00:11:16.274 "enable_quickack": false, 00:11:16.274 "enable_placement_id": 0, 00:11:16.274 "enable_zerocopy_send_server": false, 00:11:16.274 "enable_zerocopy_send_client": false, 00:11:16.274 "zerocopy_threshold": 0, 00:11:16.274 "tls_version": 0, 00:11:16.274 "enable_ktls": false 00:11:16.274 } 00:11:16.274 }, 00:11:16.274 { 00:11:16.274 "method": "sock_impl_set_options", 00:11:16.274 "params": { 00:11:16.274 "impl_name": "posix", 00:11:16.274 "recv_buf_size": 2097152, 00:11:16.274 "send_buf_size": 2097152, 00:11:16.274 "enable_recv_pipe": true, 00:11:16.274 "enable_quickack": false, 00:11:16.274 "enable_placement_id": 0, 00:11:16.274 "enable_zerocopy_send_server": true, 00:11:16.274 "enable_zerocopy_send_client": false, 00:11:16.274 "zerocopy_threshold": 0, 00:11:16.274 "tls_version": 0, 00:11:16.274 
"enable_ktls": false 00:11:16.274 } 00:11:16.274 }, 00:11:16.274 { 00:11:16.274 "method": "sock_impl_set_options", 00:11:16.274 "params": { 00:11:16.274 "impl_name": "ssl", 00:11:16.274 "recv_buf_size": 4096, 00:11:16.274 "send_buf_size": 4096, 00:11:16.274 "enable_recv_pipe": true, 00:11:16.274 "enable_quickack": false, 00:11:16.274 "enable_placement_id": 0, 00:11:16.274 "enable_zerocopy_send_server": true, 00:11:16.274 "enable_zerocopy_send_client": false, 00:11:16.274 "zerocopy_threshold": 0, 00:11:16.274 "tls_version": 0, 00:11:16.274 "enable_ktls": false 00:11:16.274 } 00:11:16.274 } 00:11:16.274 ] 00:11:16.274 }, 00:11:16.274 { 00:11:16.274 "subsystem": "vmd", 00:11:16.274 "config": [] 00:11:16.274 }, 00:11:16.274 { 00:11:16.274 "subsystem": "accel", 00:11:16.274 "config": [ 00:11:16.274 { 00:11:16.274 "method": "accel_set_options", 00:11:16.274 "params": { 00:11:16.274 "small_cache_size": 128, 00:11:16.274 "large_cache_size": 16, 00:11:16.274 "task_count": 2048, 00:11:16.274 "sequence_count": 2048, 00:11:16.274 "buf_count": 2048 00:11:16.274 } 00:11:16.274 } 00:11:16.274 ] 00:11:16.274 }, 00:11:16.274 { 00:11:16.274 "subsystem": "bdev", 00:11:16.274 "config": [ 00:11:16.274 { 00:11:16.274 "method": "bdev_set_options", 00:11:16.274 "params": { 00:11:16.274 "bdev_io_pool_size": 65535, 00:11:16.274 "bdev_io_cache_size": 256, 00:11:16.274 "bdev_auto_examine": true, 00:11:16.274 "iobuf_small_cache_size": 128, 00:11:16.274 "iobuf_large_cache_size": 16 00:11:16.274 } 00:11:16.274 }, 00:11:16.274 { 00:11:16.274 "method": "bdev_raid_set_options", 00:11:16.274 "params": { 00:11:16.274 "process_window_size_kb": 1024 00:11:16.274 } 00:11:16.274 }, 00:11:16.274 { 00:11:16.274 "method": "bdev_iscsi_set_options", 00:11:16.274 "params": { 00:11:16.274 "timeout_sec": 30 00:11:16.274 } 00:11:16.274 }, 00:11:16.274 { 00:11:16.274 "method": "bdev_nvme_set_options", 00:11:16.274 "params": { 00:11:16.274 "action_on_timeout": "none", 00:11:16.274 "timeout_us": 0, 00:11:16.274 "timeout_admin_us": 0, 00:11:16.274 "keep_alive_timeout_ms": 10000, 00:11:16.274 "transport_retry_count": 4, 00:11:16.274 "arbitration_burst": 0, 00:11:16.274 "low_priority_weight": 0, 00:11:16.274 "medium_priority_weight": 0, 00:11:16.274 "high_priority_weight": 0, 00:11:16.274 "nvme_adminq_poll_period_us": 10000, 00:11:16.274 "nvme_ioq_poll_period_us": 0, 00:11:16.274 "io_queue_requests": 512, 00:11:16.274 "delay_cmd_submit": true, 00:11:16.274 "bdev_retry_count": 3, 00:11:16.274 "transport_ack_timeout": 0, 00:11:16.274 "ctrlr_loss_timeout_sec": 0, 00:11:16.274 "reconnect_delay_sec": 0, 00:11:16.274 "fast_io_fail_timeout_sec": 0, 00:11:16.274 "generate_uuids": false, 00:11:16.274 "transport_tos": 0, 00:11:16.274 "io_path_stat": false, 00:11:16.274 "allow_accel_sequence": false 00:11:16.274 } 00:11:16.274 }, 00:11:16.274 { 00:11:16.274 "method": "bdev_nvme_attach_controller", 00:11:16.274 "params": { 00:11:16.274 "name": "TLSTEST", 00:11:16.274 "trtype": "TCP", 00:11:16.274 "adrfam": "IPv4", 00:11:16.274 "traddr": "10.0.0.2", 00:11:16.274 "trsvcid": "4420", 00:11:16.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:16.274 "prchk_reftag": false, 00:11:16.274 "prchk_guard": false, 00:11:16.274 "ctrlr_loss_timeout_sec": 0, 00:11:16.274 "reconnect_delay_sec": 0, 00:11:16.274 "fast_io_fail_timeout_sec": 0, 00:11:16.274 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:11:16.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:16.274 "hdgst": false, 00:11:16.274 "ddgst": false 00:11:16.274 } 00:11:16.274 }, 00:11:16.274 
{ 00:11:16.274 "method": "bdev_nvme_set_hotplug", 00:11:16.274 "params": { 00:11:16.274 "period_us": 100000, 00:11:16.274 "enable": false 00:11:16.274 } 00:11:16.274 }, 00:11:16.274 { 00:11:16.274 "method": "bdev_wait_for_examine" 00:11:16.274 } 00:11:16.274 ] 00:11:16.274 }, 00:11:16.274 { 00:11:16.274 "subsystem": "nbd", 00:11:16.274 "config": [] 00:11:16.274 } 00:11:16.274 ] 00:11:16.274 }' 00:11:16.274 [2024-12-14 06:41:30.121555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:16.274 [2024-12-14 06:41:30.121845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65512 ] 00:11:16.274 [2024-12-14 06:41:30.262381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.533 [2024-12-14 06:41:30.332177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.533 [2024-12-14 06:41:30.462283] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:17.102 06:41:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.102 06:41:31 -- common/autotest_common.sh@862 -- # return 0 00:11:17.102 06:41:31 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:17.361 Running I/O for 10 seconds... 00:11:27.342 00:11:27.342 Latency(us) 00:11:27.342 [2024-12-14T06:41:41.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:27.342 [2024-12-14T06:41:41.334Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:27.342 Verification LBA range: start 0x0 length 0x2000 00:11:27.342 TLSTESTn1 : 10.01 6087.53 23.78 0.00 0.00 20994.26 5183.30 20494.89 00:11:27.342 [2024-12-14T06:41:41.334Z] =================================================================================================================== 00:11:27.342 [2024-12-14T06:41:41.334Z] Total : 6087.53 23.78 0.00 0.00 20994.26 5183.30 20494.89 00:11:27.342 0 00:11:27.342 06:41:41 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:27.342 06:41:41 -- target/tls.sh@223 -- # killprocess 65512 00:11:27.342 06:41:41 -- common/autotest_common.sh@936 -- # '[' -z 65512 ']' 00:11:27.342 06:41:41 -- common/autotest_common.sh@940 -- # kill -0 65512 00:11:27.342 06:41:41 -- common/autotest_common.sh@941 -- # uname 00:11:27.342 06:41:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:27.342 06:41:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65512 00:11:27.342 killing process with pid 65512 00:11:27.342 Received shutdown signal, test time was about 10.000000 seconds 00:11:27.342 00:11:27.342 Latency(us) 00:11:27.342 [2024-12-14T06:41:41.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:27.342 [2024-12-14T06:41:41.334Z] =================================================================================================================== 00:11:27.342 [2024-12-14T06:41:41.334Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:27.342 06:41:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:27.342 06:41:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:27.342 06:41:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65512' 00:11:27.342 06:41:41 -- 
common/autotest_common.sh@955 -- # kill 65512 00:11:27.342 06:41:41 -- common/autotest_common.sh@960 -- # wait 65512 00:11:27.601 06:41:41 -- target/tls.sh@224 -- # killprocess 65480 00:11:27.601 06:41:41 -- common/autotest_common.sh@936 -- # '[' -z 65480 ']' 00:11:27.601 06:41:41 -- common/autotest_common.sh@940 -- # kill -0 65480 00:11:27.601 06:41:41 -- common/autotest_common.sh@941 -- # uname 00:11:27.602 06:41:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:27.602 06:41:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65480 00:11:27.602 killing process with pid 65480 00:11:27.602 06:41:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:27.602 06:41:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:27.602 06:41:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65480' 00:11:27.602 06:41:41 -- common/autotest_common.sh@955 -- # kill 65480 00:11:27.602 06:41:41 -- common/autotest_common.sh@960 -- # wait 65480 00:11:27.861 06:41:41 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:11:27.861 06:41:41 -- target/tls.sh@227 -- # cleanup 00:11:27.861 06:41:41 -- target/tls.sh@15 -- # process_shm --id 0 00:11:27.861 06:41:41 -- common/autotest_common.sh@806 -- # type=--id 00:11:27.861 06:41:41 -- common/autotest_common.sh@807 -- # id=0 00:11:27.861 06:41:41 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:27.861 06:41:41 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:27.861 06:41:41 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:27.861 06:41:41 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:27.861 06:41:41 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:27.861 06:41:41 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:27.861 nvmf_trace.0 00:11:27.861 Process with pid 65512 is not found 00:11:27.861 06:41:41 -- common/autotest_common.sh@821 -- # return 0 00:11:27.861 06:41:41 -- target/tls.sh@16 -- # killprocess 65512 00:11:27.861 06:41:41 -- common/autotest_common.sh@936 -- # '[' -z 65512 ']' 00:11:27.861 06:41:41 -- common/autotest_common.sh@940 -- # kill -0 65512 00:11:27.861 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (65512) - No such process 00:11:27.861 06:41:41 -- common/autotest_common.sh@963 -- # echo 'Process with pid 65512 is not found' 00:11:27.861 06:41:41 -- target/tls.sh@17 -- # nvmftestfini 00:11:27.861 06:41:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:27.861 06:41:41 -- nvmf/common.sh@116 -- # sync 00:11:27.861 06:41:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:27.861 06:41:41 -- nvmf/common.sh@119 -- # set +e 00:11:27.861 06:41:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:27.861 06:41:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:27.861 rmmod nvme_tcp 00:11:27.861 rmmod nvme_fabrics 00:11:27.861 rmmod nvme_keyring 00:11:27.861 06:41:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:27.861 06:41:41 -- nvmf/common.sh@123 -- # set -e 00:11:27.861 06:41:41 -- nvmf/common.sh@124 -- # return 0 00:11:27.861 06:41:41 -- nvmf/common.sh@477 -- # '[' -n 65480 ']' 00:11:27.861 06:41:41 -- nvmf/common.sh@478 -- # killprocess 65480 00:11:27.861 06:41:41 -- common/autotest_common.sh@936 -- # '[' -z 65480 ']' 00:11:27.861 06:41:41 -- common/autotest_common.sh@940 -- # kill -0 65480 00:11:27.861 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (65480) - No such process 00:11:27.861 Process with pid 65480 is not found 00:11:27.861 06:41:41 -- common/autotest_common.sh@963 -- # echo 'Process with pid 65480 is not found' 00:11:27.861 06:41:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:27.861 06:41:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:27.861 06:41:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:27.861 06:41:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.861 06:41:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:27.861 06:41:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.861 06:41:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.861 06:41:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.121 06:41:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:28.121 06:41:41 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:28.121 ************************************ 00:11:28.121 END TEST nvmf_tls 00:11:28.121 ************************************ 00:11:28.121 00:11:28.121 real 1m9.792s 00:11:28.121 user 1m47.995s 00:11:28.121 sys 0m23.325s 00:11:28.121 06:41:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:28.121 06:41:41 -- common/autotest_common.sh@10 -- # set +x 00:11:28.121 06:41:41 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:28.121 06:41:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:28.121 06:41:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:28.121 06:41:41 -- common/autotest_common.sh@10 -- # set +x 00:11:28.121 ************************************ 00:11:28.121 START TEST nvmf_fips 00:11:28.121 ************************************ 00:11:28.121 06:41:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:11:28.121 * Looking for test storage... 
00:11:28.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:11:28.121 06:41:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:28.121 06:41:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:28.121 06:41:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:28.121 06:41:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:28.121 06:41:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:28.121 06:41:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:28.121 06:41:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:28.121 06:41:42 -- scripts/common.sh@335 -- # IFS=.-: 00:11:28.121 06:41:42 -- scripts/common.sh@335 -- # read -ra ver1 00:11:28.121 06:41:42 -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.121 06:41:42 -- scripts/common.sh@336 -- # read -ra ver2 00:11:28.121 06:41:42 -- scripts/common.sh@337 -- # local 'op=<' 00:11:28.121 06:41:42 -- scripts/common.sh@339 -- # ver1_l=2 00:11:28.121 06:41:42 -- scripts/common.sh@340 -- # ver2_l=1 00:11:28.121 06:41:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:28.121 06:41:42 -- scripts/common.sh@343 -- # case "$op" in 00:11:28.121 06:41:42 -- scripts/common.sh@344 -- # : 1 00:11:28.121 06:41:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:28.121 06:41:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.121 06:41:42 -- scripts/common.sh@364 -- # decimal 1 00:11:28.121 06:41:42 -- scripts/common.sh@352 -- # local d=1 00:11:28.121 06:41:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.121 06:41:42 -- scripts/common.sh@354 -- # echo 1 00:11:28.121 06:41:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:28.121 06:41:42 -- scripts/common.sh@365 -- # decimal 2 00:11:28.121 06:41:42 -- scripts/common.sh@352 -- # local d=2 00:11:28.121 06:41:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.121 06:41:42 -- scripts/common.sh@354 -- # echo 2 00:11:28.121 06:41:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:28.121 06:41:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:28.121 06:41:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:28.121 06:41:42 -- scripts/common.sh@367 -- # return 0 00:11:28.121 06:41:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.121 06:41:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:28.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.121 --rc genhtml_branch_coverage=1 00:11:28.121 --rc genhtml_function_coverage=1 00:11:28.121 --rc genhtml_legend=1 00:11:28.121 --rc geninfo_all_blocks=1 00:11:28.121 --rc geninfo_unexecuted_blocks=1 00:11:28.121 00:11:28.121 ' 00:11:28.121 06:41:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:28.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.122 --rc genhtml_branch_coverage=1 00:11:28.122 --rc genhtml_function_coverage=1 00:11:28.122 --rc genhtml_legend=1 00:11:28.122 --rc geninfo_all_blocks=1 00:11:28.122 --rc geninfo_unexecuted_blocks=1 00:11:28.122 00:11:28.122 ' 00:11:28.122 06:41:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:28.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.122 --rc genhtml_branch_coverage=1 00:11:28.122 --rc genhtml_function_coverage=1 00:11:28.122 --rc genhtml_legend=1 00:11:28.122 --rc geninfo_all_blocks=1 00:11:28.122 --rc geninfo_unexecuted_blocks=1 00:11:28.122 00:11:28.122 ' 00:11:28.122 
06:41:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:28.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.122 --rc genhtml_branch_coverage=1 00:11:28.122 --rc genhtml_function_coverage=1 00:11:28.122 --rc genhtml_legend=1 00:11:28.122 --rc geninfo_all_blocks=1 00:11:28.122 --rc geninfo_unexecuted_blocks=1 00:11:28.122 00:11:28.122 ' 00:11:28.122 06:41:42 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:28.122 06:41:42 -- nvmf/common.sh@7 -- # uname -s 00:11:28.122 06:41:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.122 06:41:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.122 06:41:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.122 06:41:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.122 06:41:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.122 06:41:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.122 06:41:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.122 06:41:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.122 06:41:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.122 06:41:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.122 06:41:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:11:28.122 06:41:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:11:28.122 06:41:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.122 06:41:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.122 06:41:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:28.122 06:41:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:28.122 06:41:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.122 06:41:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.122 06:41:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.122 06:41:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.122 06:41:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.122 06:41:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.122 06:41:42 -- paths/export.sh@5 -- # export PATH 00:11:28.122 06:41:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.122 06:41:42 -- nvmf/common.sh@46 -- # : 0 00:11:28.122 06:41:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:28.122 06:41:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:28.122 06:41:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:28.122 06:41:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.122 06:41:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.122 06:41:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:28.122 06:41:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:28.122 06:41:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:28.122 06:41:42 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:28.122 06:41:42 -- fips/fips.sh@89 -- # check_openssl_version 00:11:28.122 06:41:42 -- fips/fips.sh@83 -- # local target=3.0.0 00:11:28.382 06:41:42 -- fips/fips.sh@85 -- # awk '{print $2}' 00:11:28.382 06:41:42 -- fips/fips.sh@85 -- # openssl version 00:11:28.382 06:41:42 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:11:28.382 06:41:42 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:11:28.382 06:41:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:28.382 06:41:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:28.382 06:41:42 -- scripts/common.sh@335 -- # IFS=.-: 00:11:28.382 06:41:42 -- scripts/common.sh@335 -- # read -ra ver1 00:11:28.382 06:41:42 -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.382 06:41:42 -- scripts/common.sh@336 -- # read -ra ver2 00:11:28.382 06:41:42 -- scripts/common.sh@337 -- # local 'op=>=' 00:11:28.382 06:41:42 -- scripts/common.sh@339 -- # ver1_l=3 00:11:28.382 06:41:42 -- scripts/common.sh@340 -- # ver2_l=3 00:11:28.382 06:41:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:28.382 06:41:42 -- scripts/common.sh@343 -- # case "$op" in 00:11:28.382 06:41:42 -- scripts/common.sh@347 -- # : 1 00:11:28.382 06:41:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:28.382 06:41:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.382 06:41:42 -- scripts/common.sh@364 -- # decimal 3 00:11:28.382 06:41:42 -- scripts/common.sh@352 -- # local d=3 00:11:28.382 06:41:42 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:28.382 06:41:42 -- scripts/common.sh@354 -- # echo 3 00:11:28.382 06:41:42 -- scripts/common.sh@364 -- # ver1[v]=3 00:11:28.382 06:41:42 -- scripts/common.sh@365 -- # decimal 3 00:11:28.382 06:41:42 -- scripts/common.sh@352 -- # local d=3 00:11:28.382 06:41:42 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:11:28.382 06:41:42 -- scripts/common.sh@354 -- # echo 3 00:11:28.382 06:41:42 -- scripts/common.sh@365 -- # ver2[v]=3 00:11:28.382 06:41:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:28.382 06:41:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:28.382 06:41:42 -- scripts/common.sh@363 -- # (( v++ )) 00:11:28.382 06:41:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.382 06:41:42 -- scripts/common.sh@364 -- # decimal 1 00:11:28.382 06:41:42 -- scripts/common.sh@352 -- # local d=1 00:11:28.382 06:41:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.382 06:41:42 -- scripts/common.sh@354 -- # echo 1 00:11:28.382 06:41:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:28.382 06:41:42 -- scripts/common.sh@365 -- # decimal 0 00:11:28.382 06:41:42 -- scripts/common.sh@352 -- # local d=0 00:11:28.382 06:41:42 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:11:28.382 06:41:42 -- scripts/common.sh@354 -- # echo 0 00:11:28.382 06:41:42 -- scripts/common.sh@365 -- # ver2[v]=0 00:11:28.382 06:41:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:28.382 06:41:42 -- scripts/common.sh@366 -- # return 0 00:11:28.382 06:41:42 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:11:28.382 06:41:42 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:11:28.382 06:41:42 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:11:28.382 06:41:42 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:11:28.382 06:41:42 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:11:28.382 06:41:42 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:11:28.382 06:41:42 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:11:28.382 06:41:42 -- fips/fips.sh@113 -- # build_openssl_config 00:11:28.382 06:41:42 -- fips/fips.sh@37 -- # cat 00:11:28.382 06:41:42 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:11:28.382 06:41:42 -- fips/fips.sh@58 -- # cat - 00:11:28.382 06:41:42 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:11:28.382 06:41:42 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:11:28.382 06:41:42 -- fips/fips.sh@116 -- # mapfile -t providers 00:11:28.382 06:41:42 -- fips/fips.sh@116 -- # openssl list -providers 00:11:28.382 06:41:42 -- fips/fips.sh@116 -- # grep name 00:11:28.382 06:41:42 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:11:28.382 06:41:42 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:11:28.382 06:41:42 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:11:28.382 06:41:42 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:11:28.382 06:41:42 -- fips/fips.sh@127 -- # : 00:11:28.382 06:41:42 -- common/autotest_common.sh@650 -- # local es=0 00:11:28.382 06:41:42 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:11:28.382 06:41:42 -- common/autotest_common.sh@638 -- # local arg=openssl 00:11:28.382 06:41:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:28.382 06:41:42 -- common/autotest_common.sh@642 -- # type -t openssl 00:11:28.382 06:41:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:28.382 06:41:42 -- common/autotest_common.sh@644 -- # type -P openssl 00:11:28.382 06:41:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:28.383 06:41:42 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:11:28.383 06:41:42 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:11:28.383 06:41:42 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:11:28.383 Error setting digest 00:11:28.383 4042C434B97F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:11:28.383 4042C434B97F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:11:28.383 06:41:42 -- common/autotest_common.sh@653 -- # es=1 00:11:28.383 06:41:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:28.383 06:41:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:28.383 06:41:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:28.383 06:41:42 -- fips/fips.sh@130 -- # nvmftestinit 00:11:28.383 06:41:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:28.383 06:41:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.383 06:41:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:28.383 06:41:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:28.383 06:41:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:28.383 06:41:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.383 06:41:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.383 06:41:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.383 06:41:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:28.383 06:41:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:28.383 06:41:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:28.383 06:41:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:28.383 06:41:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:28.383 06:41:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:28.383 06:41:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.383 06:41:42 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.383 06:41:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:28.383 06:41:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:28.383 06:41:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:28.383 06:41:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:28.383 06:41:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:28.383 06:41:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.383 06:41:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:28.383 06:41:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:28.383 06:41:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:28.383 06:41:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:28.383 06:41:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:28.383 06:41:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:28.383 Cannot find device "nvmf_tgt_br" 00:11:28.383 06:41:42 -- nvmf/common.sh@154 -- # true 00:11:28.383 06:41:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:28.383 Cannot find device "nvmf_tgt_br2" 00:11:28.383 06:41:42 -- nvmf/common.sh@155 -- # true 00:11:28.383 06:41:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:28.383 06:41:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:28.383 Cannot find device "nvmf_tgt_br" 00:11:28.383 06:41:42 -- nvmf/common.sh@157 -- # true 00:11:28.383 06:41:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:28.383 Cannot find device "nvmf_tgt_br2" 00:11:28.383 06:41:42 -- nvmf/common.sh@158 -- # true 00:11:28.383 06:41:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:28.642 06:41:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:28.642 06:41:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:28.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:28.642 06:41:42 -- nvmf/common.sh@161 -- # true 00:11:28.642 06:41:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:28.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:28.642 06:41:42 -- nvmf/common.sh@162 -- # true 00:11:28.642 06:41:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:28.642 06:41:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:28.642 06:41:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:28.642 06:41:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:28.642 06:41:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:28.642 06:41:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:28.642 06:41:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:28.642 06:41:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:28.642 06:41:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:28.642 06:41:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:28.642 06:41:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:28.642 06:41:42 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:28.642 06:41:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:28.642 06:41:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:28.642 06:41:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:28.642 06:41:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:28.642 06:41:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:28.642 06:41:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:28.642 06:41:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:28.642 06:41:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:28.642 06:41:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:28.642 06:41:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:28.642 06:41:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:28.642 06:41:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:28.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:11:28.642 00:11:28.642 --- 10.0.0.2 ping statistics --- 00:11:28.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.642 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:28.642 06:41:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:28.642 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:28.642 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:11:28.643 00:11:28.643 --- 10.0.0.3 ping statistics --- 00:11:28.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.643 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:28.643 06:41:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:28.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:11:28.643 00:11:28.643 --- 10.0.0.1 ping statistics --- 00:11:28.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.643 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:28.643 06:41:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.643 06:41:42 -- nvmf/common.sh@421 -- # return 0 00:11:28.643 06:41:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:28.643 06:41:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.643 06:41:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:28.643 06:41:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:28.643 06:41:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.643 06:41:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:28.643 06:41:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:28.643 06:41:42 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:11:28.643 06:41:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:28.643 06:41:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:28.643 06:41:42 -- common/autotest_common.sh@10 -- # set +x 00:11:28.643 06:41:42 -- nvmf/common.sh@469 -- # nvmfpid=65871 00:11:28.643 06:41:42 -- nvmf/common.sh@470 -- # waitforlisten 65871 00:11:28.643 06:41:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:28.643 06:41:42 -- common/autotest_common.sh@829 -- # '[' -z 65871 ']' 00:11:28.643 06:41:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.643 06:41:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:28.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.643 06:41:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.643 06:41:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:28.643 06:41:42 -- common/autotest_common.sh@10 -- # set +x 00:11:28.902 [2024-12-14 06:41:42.714225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:28.902 [2024-12-14 06:41:42.714327] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.902 [2024-12-14 06:41:42.855209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.160 [2024-12-14 06:41:42.915291] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:29.160 [2024-12-14 06:41:42.915865] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.160 [2024-12-14 06:41:42.916007] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.160 [2024-12-14 06:41:42.916091] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
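The veth/bridge topology that nvmf_veth_init builds above (and then verifies with the three pings) can be reproduced by hand roughly as follows. This is a minimal sketch assembled from the commands already traced in this log, using the same interface and namespace names; it assumes root privileges and that none of the devices exist yet.
  # Target namespace plus two veth pairs (initiator side and target side)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Address the endpoints: 10.0.0.1 is the initiator, 10.0.0.2 the in-namespace target
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # Bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Let NVMe/TCP traffic through and confirm the target address answers
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
With this in place the target process started inside nvmf_tgt_ns_spdk can listen on 10.0.0.2:4420 while the initiator-side tools on the host reach it through the bridge, which is exactly the path the TLS and FIPS tests in this run exercise.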
00:11:29.160 [2024-12-14 06:41:42.916244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.097 06:41:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:30.097 06:41:43 -- common/autotest_common.sh@862 -- # return 0 00:11:30.097 06:41:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:30.097 06:41:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:30.097 06:41:43 -- common/autotest_common.sh@10 -- # set +x 00:11:30.097 06:41:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.097 06:41:43 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:11:30.097 06:41:43 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:30.097 06:41:43 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:30.097 06:41:43 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:11:30.097 06:41:43 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:30.097 06:41:43 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:30.097 06:41:43 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:30.097 06:41:43 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:30.097 [2024-12-14 06:41:43.966941] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.097 [2024-12-14 06:41:43.982910] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:30.097 [2024-12-14 06:41:43.983131] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.097 malloc0 00:11:30.097 06:41:44 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:30.097 06:41:44 -- fips/fips.sh@147 -- # bdevperf_pid=65911 00:11:30.097 06:41:44 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:30.097 06:41:44 -- fips/fips.sh@148 -- # waitforlisten 65911 /var/tmp/bdevperf.sock 00:11:30.097 06:41:44 -- common/autotest_common.sh@829 -- # '[' -z 65911 ']' 00:11:30.097 06:41:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:30.097 06:41:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:30.097 06:41:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:30.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:30.097 06:41:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:30.097 06:41:44 -- common/autotest_common.sh@10 -- # set +x 00:11:30.356 [2024-12-14 06:41:44.116666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:30.356 [2024-12-14 06:41:44.116946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65911 ] 00:11:30.356 [2024-12-14 06:41:44.258049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.356 [2024-12-14 06:41:44.328315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.294 06:41:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.294 06:41:45 -- common/autotest_common.sh@862 -- # return 0 00:11:31.294 06:41:45 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:31.576 [2024-12-14 06:41:45.374889] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:31.576 TLSTESTn1 00:11:31.576 06:41:45 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:31.843 Running I/O for 10 seconds... 00:11:41.822 00:11:41.822 Latency(us) 00:11:41.822 [2024-12-14T06:41:55.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.822 [2024-12-14T06:41:55.814Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:41.822 Verification LBA range: start 0x0 length 0x2000 00:11:41.822 TLSTESTn1 : 10.01 5653.26 22.08 0.00 0.00 22606.71 5123.72 265003.75 00:11:41.822 [2024-12-14T06:41:55.814Z] =================================================================================================================== 00:11:41.822 [2024-12-14T06:41:55.814Z] Total : 5653.26 22.08 0.00 0.00 22606.71 5123.72 265003.75 00:11:41.822 0 00:11:41.822 06:41:55 -- fips/fips.sh@1 -- # cleanup 00:11:41.822 06:41:55 -- fips/fips.sh@15 -- # process_shm --id 0 00:11:41.822 06:41:55 -- common/autotest_common.sh@806 -- # type=--id 00:11:41.822 06:41:55 -- common/autotest_common.sh@807 -- # id=0 00:11:41.822 06:41:55 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:11:41.822 06:41:55 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:41.822 06:41:55 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:11:41.822 06:41:55 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:11:41.822 06:41:55 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:11:41.822 06:41:55 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:41.822 nvmf_trace.0 00:11:41.822 06:41:55 -- common/autotest_common.sh@821 -- # return 0 00:11:41.822 06:41:55 -- fips/fips.sh@16 -- # killprocess 65911 00:11:41.822 06:41:55 -- common/autotest_common.sh@936 -- # '[' -z 65911 ']' 00:11:41.822 06:41:55 -- common/autotest_common.sh@940 -- # kill -0 65911 00:11:41.822 06:41:55 -- common/autotest_common.sh@941 -- # uname 00:11:41.822 06:41:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:41.822 06:41:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65911 00:11:41.822 killing process with pid 65911 00:11:41.822 Received shutdown signal, test time was about 10.000000 seconds 00:11:41.822 00:11:41.822 Latency(us) 00:11:41.822 
[2024-12-14T06:41:55.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.822 [2024-12-14T06:41:55.814Z] =================================================================================================================== 00:11:41.822 [2024-12-14T06:41:55.814Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:41.822 06:41:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:41.822 06:41:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:41.822 06:41:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65911' 00:11:41.822 06:41:55 -- common/autotest_common.sh@955 -- # kill 65911 00:11:41.822 06:41:55 -- common/autotest_common.sh@960 -- # wait 65911 00:11:42.081 06:41:55 -- fips/fips.sh@17 -- # nvmftestfini 00:11:42.081 06:41:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:42.081 06:41:55 -- nvmf/common.sh@116 -- # sync 00:11:42.082 06:41:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:42.082 06:41:55 -- nvmf/common.sh@119 -- # set +e 00:11:42.082 06:41:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:42.082 06:41:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:42.082 rmmod nvme_tcp 00:11:42.082 rmmod nvme_fabrics 00:11:42.082 rmmod nvme_keyring 00:11:42.082 06:41:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:42.082 06:41:56 -- nvmf/common.sh@123 -- # set -e 00:11:42.082 06:41:56 -- nvmf/common.sh@124 -- # return 0 00:11:42.082 06:41:56 -- nvmf/common.sh@477 -- # '[' -n 65871 ']' 00:11:42.082 06:41:56 -- nvmf/common.sh@478 -- # killprocess 65871 00:11:42.082 06:41:56 -- common/autotest_common.sh@936 -- # '[' -z 65871 ']' 00:11:42.082 06:41:56 -- common/autotest_common.sh@940 -- # kill -0 65871 00:11:42.082 06:41:56 -- common/autotest_common.sh@941 -- # uname 00:11:42.082 06:41:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:42.082 06:41:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65871 00:11:42.082 killing process with pid 65871 00:11:42.082 06:41:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:42.082 06:41:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:42.082 06:41:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65871' 00:11:42.082 06:41:56 -- common/autotest_common.sh@955 -- # kill 65871 00:11:42.082 06:41:56 -- common/autotest_common.sh@960 -- # wait 65871 00:11:42.341 06:41:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:42.341 06:41:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:42.341 06:41:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:42.341 06:41:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:42.341 06:41:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:42.341 06:41:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.341 06:41:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.341 06:41:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.341 06:41:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:42.341 06:41:56 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:11:42.341 ************************************ 00:11:42.341 END TEST nvmf_fips 00:11:42.341 ************************************ 00:11:42.341 00:11:42.341 real 0m14.337s 00:11:42.341 user 0m19.843s 00:11:42.341 sys 0m5.542s 00:11:42.341 06:41:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 
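The nvmf_fips stage above exercises NVMe/TCP with TLS: it writes a TLS PSK to key.txt with mode 0600, hands that file to setup_nvmf_tgt_conf for the target side, then attaches a bdevperf controller with --psk and drives verify I/O for 10 seconds via perform_tests. A condensed sketch of the initiator-side sequence, built only from commands visible in the trace; the target-side RPCs issued by setup_nvmf_tgt_conf are abbreviated to a comment:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path" && chmod 0600 "$key_path"
    # Target side (setup_nvmf_tgt_conf): TCP transport, subsystem cnode1, TLS listener on 10.0.0.2:4420.
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # (the test waits for /var/tmp/bdevperf.sock to appear before issuing the attach RPC)
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests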
00:11:42.341 06:41:56 -- common/autotest_common.sh@10 -- # set +x 00:11:42.341 06:41:56 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:11:42.341 06:41:56 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:42.341 06:41:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:42.341 06:41:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:42.341 06:41:56 -- common/autotest_common.sh@10 -- # set +x 00:11:42.341 ************************************ 00:11:42.341 START TEST nvmf_fuzz 00:11:42.341 ************************************ 00:11:42.341 06:41:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:11:42.601 * Looking for test storage... 00:11:42.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:42.601 06:41:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:42.601 06:41:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:42.601 06:41:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:42.601 06:41:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:42.601 06:41:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:42.601 06:41:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:42.601 06:41:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:42.601 06:41:56 -- scripts/common.sh@335 -- # IFS=.-: 00:11:42.601 06:41:56 -- scripts/common.sh@335 -- # read -ra ver1 00:11:42.601 06:41:56 -- scripts/common.sh@336 -- # IFS=.-: 00:11:42.601 06:41:56 -- scripts/common.sh@336 -- # read -ra ver2 00:11:42.601 06:41:56 -- scripts/common.sh@337 -- # local 'op=<' 00:11:42.601 06:41:56 -- scripts/common.sh@339 -- # ver1_l=2 00:11:42.601 06:41:56 -- scripts/common.sh@340 -- # ver2_l=1 00:11:42.601 06:41:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:42.601 06:41:56 -- scripts/common.sh@343 -- # case "$op" in 00:11:42.601 06:41:56 -- scripts/common.sh@344 -- # : 1 00:11:42.601 06:41:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:42.601 06:41:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:42.601 06:41:56 -- scripts/common.sh@364 -- # decimal 1 00:11:42.601 06:41:56 -- scripts/common.sh@352 -- # local d=1 00:11:42.601 06:41:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:42.601 06:41:56 -- scripts/common.sh@354 -- # echo 1 00:11:42.601 06:41:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:42.601 06:41:56 -- scripts/common.sh@365 -- # decimal 2 00:11:42.601 06:41:56 -- scripts/common.sh@352 -- # local d=2 00:11:42.601 06:41:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:42.601 06:41:56 -- scripts/common.sh@354 -- # echo 2 00:11:42.601 06:41:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:42.601 06:41:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:42.601 06:41:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:42.601 06:41:56 -- scripts/common.sh@367 -- # return 0 00:11:42.601 06:41:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:42.601 06:41:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:42.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.601 --rc genhtml_branch_coverage=1 00:11:42.601 --rc genhtml_function_coverage=1 00:11:42.601 --rc genhtml_legend=1 00:11:42.601 --rc geninfo_all_blocks=1 00:11:42.601 --rc geninfo_unexecuted_blocks=1 00:11:42.601 00:11:42.601 ' 00:11:42.601 06:41:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:42.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.601 --rc genhtml_branch_coverage=1 00:11:42.601 --rc genhtml_function_coverage=1 00:11:42.601 --rc genhtml_legend=1 00:11:42.601 --rc geninfo_all_blocks=1 00:11:42.601 --rc geninfo_unexecuted_blocks=1 00:11:42.601 00:11:42.601 ' 00:11:42.601 06:41:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:42.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.601 --rc genhtml_branch_coverage=1 00:11:42.601 --rc genhtml_function_coverage=1 00:11:42.601 --rc genhtml_legend=1 00:11:42.601 --rc geninfo_all_blocks=1 00:11:42.601 --rc geninfo_unexecuted_blocks=1 00:11:42.601 00:11:42.601 ' 00:11:42.601 06:41:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:42.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:42.601 --rc genhtml_branch_coverage=1 00:11:42.601 --rc genhtml_function_coverage=1 00:11:42.601 --rc genhtml_legend=1 00:11:42.601 --rc geninfo_all_blocks=1 00:11:42.601 --rc geninfo_unexecuted_blocks=1 00:11:42.601 00:11:42.601 ' 00:11:42.601 06:41:56 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:42.601 06:41:56 -- nvmf/common.sh@7 -- # uname -s 00:11:42.601 06:41:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.601 06:41:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.601 06:41:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.601 06:41:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.601 06:41:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.601 06:41:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.601 06:41:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.601 06:41:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.601 06:41:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.601 06:41:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.601 06:41:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 
00:11:42.601 06:41:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:11:42.601 06:41:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.601 06:41:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.601 06:41:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:42.601 06:41:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:42.601 06:41:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.601 06:41:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.602 06:41:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.602 06:41:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.602 06:41:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.602 06:41:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.602 06:41:56 -- paths/export.sh@5 -- # export PATH 00:11:42.602 06:41:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.602 06:41:56 -- nvmf/common.sh@46 -- # : 0 00:11:42.602 06:41:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:42.602 06:41:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:42.602 06:41:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:42.602 06:41:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.602 06:41:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.602 06:41:56 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:11:42.602 06:41:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:42.602 06:41:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:42.602 06:41:56 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:11:42.602 06:41:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:42.602 06:41:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.602 06:41:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:42.602 06:41:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:42.602 06:41:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:42.602 06:41:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.602 06:41:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.602 06:41:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.602 06:41:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:42.602 06:41:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:42.602 06:41:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:42.602 06:41:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:42.602 06:41:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:42.602 06:41:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:42.602 06:41:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.602 06:41:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.602 06:41:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:42.602 06:41:56 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:42.602 06:41:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:42.602 06:41:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:42.602 06:41:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:42.602 06:41:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.602 06:41:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:42.602 06:41:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:42.602 06:41:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:42.602 06:41:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:42.602 06:41:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:42.602 06:41:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:42.602 Cannot find device "nvmf_tgt_br" 00:11:42.602 06:41:56 -- nvmf/common.sh@154 -- # true 00:11:42.602 06:41:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:42.602 Cannot find device "nvmf_tgt_br2" 00:11:42.602 06:41:56 -- nvmf/common.sh@155 -- # true 00:11:42.602 06:41:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:42.602 06:41:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:42.861 Cannot find device "nvmf_tgt_br" 00:11:42.861 06:41:56 -- nvmf/common.sh@157 -- # true 00:11:42.861 06:41:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:42.861 Cannot find device "nvmf_tgt_br2" 00:11:42.861 06:41:56 -- nvmf/common.sh@158 -- # true 00:11:42.861 06:41:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:42.861 06:41:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:42.861 06:41:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:42.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.861 06:41:56 -- nvmf/common.sh@161 -- # true 00:11:42.861 06:41:56 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:42.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:42.861 06:41:56 -- nvmf/common.sh@162 -- # true 00:11:42.861 06:41:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:42.861 06:41:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:42.861 06:41:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:42.861 06:41:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:42.861 06:41:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:42.861 06:41:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:42.861 06:41:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:42.862 06:41:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:42.862 06:41:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:42.862 06:41:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:42.862 06:41:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:42.862 06:41:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:42.862 06:41:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:42.862 06:41:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:42.862 06:41:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:42.862 06:41:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:42.862 06:41:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:42.862 06:41:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:42.862 06:41:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:42.862 06:41:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:42.862 06:41:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:42.862 06:41:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:42.862 06:41:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:42.862 06:41:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:42.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:11:42.862 00:11:42.862 --- 10.0.0.2 ping statistics --- 00:11:42.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.862 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:42.862 06:41:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:42.862 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:42.862 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:11:42.862 00:11:42.862 --- 10.0.0.3 ping statistics --- 00:11:42.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.862 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:42.862 06:41:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:42.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:11:42.862 00:11:42.862 --- 10.0.0.1 ping statistics --- 00:11:42.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.862 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:11:42.862 06:41:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.862 06:41:56 -- nvmf/common.sh@421 -- # return 0 00:11:42.862 06:41:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:42.862 06:41:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.862 06:41:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:42.862 06:41:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:42.862 06:41:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.862 06:41:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:42.862 06:41:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:43.121 06:41:56 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=66247 00:11:43.121 06:41:56 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:43.121 06:41:56 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:43.121 06:41:56 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 66247 00:11:43.121 06:41:56 -- common/autotest_common.sh@829 -- # '[' -z 66247 ']' 00:11:43.121 06:41:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.121 06:41:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.121 06:41:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
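The block above is nvmf_veth_init from nvmf/common.sh rebuilding the test fabric from scratch for the fuzz stage: a network namespace for the target, three veth pairs whose host ends are joined by a bridge, an iptables accept rule for port 4420, and ping checks of all three addresses. A condensed sketch of that topology, regrouped slightly for readability but otherwise as traced:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target, first port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target, second port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1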
00:11:43.121 06:41:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.121 06:41:56 -- common/autotest_common.sh@10 -- # set +x 00:11:44.057 06:41:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.057 06:41:57 -- common/autotest_common.sh@862 -- # return 0 00:11:44.057 06:41:57 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.057 06:41:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.057 06:41:57 -- common/autotest_common.sh@10 -- # set +x 00:11:44.057 06:41:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.057 06:41:57 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:11:44.057 06:41:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.057 06:41:57 -- common/autotest_common.sh@10 -- # set +x 00:11:44.057 Malloc0 00:11:44.057 06:41:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.057 06:41:57 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:44.057 06:41:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.057 06:41:57 -- common/autotest_common.sh@10 -- # set +x 00:11:44.057 06:41:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.057 06:41:57 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:44.057 06:41:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.057 06:41:57 -- common/autotest_common.sh@10 -- # set +x 00:11:44.057 06:41:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.057 06:41:57 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.057 06:41:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.057 06:41:57 -- common/autotest_common.sh@10 -- # set +x 00:11:44.057 06:41:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.057 06:41:58 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:11:44.057 06:41:58 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:11:44.625 Shutting down the fuzz application 00:11:44.625 06:41:58 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:11:44.884 Shutting down the fuzz application 00:11:44.884 06:41:58 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.884 06:41:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.884 06:41:58 -- common/autotest_common.sh@10 -- # set +x 00:11:44.884 06:41:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.884 06:41:58 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:44.884 06:41:58 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:11:44.884 06:41:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:44.884 06:41:58 -- nvmf/common.sh@116 -- # sync 00:11:44.884 06:41:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:44.884 06:41:58 -- nvmf/common.sh@119 -- # set +e 00:11:44.884 06:41:58 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:11:44.884 06:41:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:44.884 rmmod nvme_tcp 00:11:44.884 rmmod nvme_fabrics 00:11:44.884 rmmod nvme_keyring 00:11:44.884 06:41:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:44.884 06:41:58 -- nvmf/common.sh@123 -- # set -e 00:11:44.884 06:41:58 -- nvmf/common.sh@124 -- # return 0 00:11:44.884 06:41:58 -- nvmf/common.sh@477 -- # '[' -n 66247 ']' 00:11:44.884 06:41:58 -- nvmf/common.sh@478 -- # killprocess 66247 00:11:44.884 06:41:58 -- common/autotest_common.sh@936 -- # '[' -z 66247 ']' 00:11:44.884 06:41:58 -- common/autotest_common.sh@940 -- # kill -0 66247 00:11:44.884 06:41:58 -- common/autotest_common.sh@941 -- # uname 00:11:44.884 06:41:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.884 06:41:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66247 00:11:44.884 killing process with pid 66247 00:11:44.884 06:41:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:44.884 06:41:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:44.884 06:41:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66247' 00:11:44.884 06:41:58 -- common/autotest_common.sh@955 -- # kill 66247 00:11:44.884 06:41:58 -- common/autotest_common.sh@960 -- # wait 66247 00:11:45.143 06:41:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:45.143 06:41:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:45.143 06:41:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:45.143 06:41:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:45.143 06:41:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:45.143 06:41:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.143 06:41:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.143 06:41:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.143 06:41:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:45.143 06:41:59 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:11:45.143 00:11:45.143 real 0m2.771s 00:11:45.143 user 0m3.061s 00:11:45.143 sys 0m0.570s 00:11:45.143 06:41:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:45.143 ************************************ 00:11:45.143 END TEST nvmf_fuzz 00:11:45.143 ************************************ 00:11:45.143 06:41:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.403 06:41:59 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:45.403 06:41:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:45.403 06:41:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:45.403 06:41:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.403 ************************************ 00:11:45.403 START TEST nvmf_multiconnection 00:11:45.403 ************************************ 00:11:45.403 06:41:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:11:45.403 * Looking for test storage... 
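The nvmf_fuzz stage that just finished wires a single 64 MB malloc namespace behind nqn.2016-06.io.spdk:cnode1 and runs nvme_fuzz twice: a 30-second seeded run, then a second run with example.json supplied via -j. A condensed sketch of that sequence, writing the harness's rpc_cmd wrapper out as the rpc.py script it calls:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create -b Malloc0 64 512
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    fuzz=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
    "$fuzz" -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
    "$fuzz" -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
        -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a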
00:11:45.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:45.403 06:41:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:45.403 06:41:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:45.403 06:41:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:45.403 06:41:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:45.403 06:41:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:45.403 06:41:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:45.403 06:41:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:45.403 06:41:59 -- scripts/common.sh@335 -- # IFS=.-: 00:11:45.403 06:41:59 -- scripts/common.sh@335 -- # read -ra ver1 00:11:45.403 06:41:59 -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.403 06:41:59 -- scripts/common.sh@336 -- # read -ra ver2 00:11:45.403 06:41:59 -- scripts/common.sh@337 -- # local 'op=<' 00:11:45.403 06:41:59 -- scripts/common.sh@339 -- # ver1_l=2 00:11:45.403 06:41:59 -- scripts/common.sh@340 -- # ver2_l=1 00:11:45.403 06:41:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:45.403 06:41:59 -- scripts/common.sh@343 -- # case "$op" in 00:11:45.403 06:41:59 -- scripts/common.sh@344 -- # : 1 00:11:45.403 06:41:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:45.403 06:41:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.403 06:41:59 -- scripts/common.sh@364 -- # decimal 1 00:11:45.403 06:41:59 -- scripts/common.sh@352 -- # local d=1 00:11:45.403 06:41:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.403 06:41:59 -- scripts/common.sh@354 -- # echo 1 00:11:45.403 06:41:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:45.403 06:41:59 -- scripts/common.sh@365 -- # decimal 2 00:11:45.403 06:41:59 -- scripts/common.sh@352 -- # local d=2 00:11:45.403 06:41:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.403 06:41:59 -- scripts/common.sh@354 -- # echo 2 00:11:45.403 06:41:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:45.403 06:41:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:45.403 06:41:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:45.403 06:41:59 -- scripts/common.sh@367 -- # return 0 00:11:45.403 06:41:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.403 06:41:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:45.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.403 --rc genhtml_branch_coverage=1 00:11:45.403 --rc genhtml_function_coverage=1 00:11:45.403 --rc genhtml_legend=1 00:11:45.403 --rc geninfo_all_blocks=1 00:11:45.403 --rc geninfo_unexecuted_blocks=1 00:11:45.403 00:11:45.403 ' 00:11:45.403 06:41:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:45.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.403 --rc genhtml_branch_coverage=1 00:11:45.403 --rc genhtml_function_coverage=1 00:11:45.403 --rc genhtml_legend=1 00:11:45.403 --rc geninfo_all_blocks=1 00:11:45.403 --rc geninfo_unexecuted_blocks=1 00:11:45.403 00:11:45.403 ' 00:11:45.403 06:41:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:45.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.403 --rc genhtml_branch_coverage=1 00:11:45.403 --rc genhtml_function_coverage=1 00:11:45.403 --rc genhtml_legend=1 00:11:45.403 --rc geninfo_all_blocks=1 00:11:45.403 --rc geninfo_unexecuted_blocks=1 00:11:45.403 00:11:45.403 ' 00:11:45.403 
06:41:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:45.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.403 --rc genhtml_branch_coverage=1 00:11:45.403 --rc genhtml_function_coverage=1 00:11:45.403 --rc genhtml_legend=1 00:11:45.403 --rc geninfo_all_blocks=1 00:11:45.403 --rc geninfo_unexecuted_blocks=1 00:11:45.403 00:11:45.403 ' 00:11:45.403 06:41:59 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:45.403 06:41:59 -- nvmf/common.sh@7 -- # uname -s 00:11:45.403 06:41:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.403 06:41:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.403 06:41:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.403 06:41:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.403 06:41:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.403 06:41:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.403 06:41:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.403 06:41:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.403 06:41:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.403 06:41:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.403 06:41:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:11:45.403 06:41:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:11:45.403 06:41:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.403 06:41:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.403 06:41:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:45.403 06:41:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:45.403 06:41:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.403 06:41:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.403 06:41:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.403 06:41:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.403 06:41:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.403 06:41:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.403 06:41:59 -- paths/export.sh@5 -- # export PATH 00:11:45.403 06:41:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.403 06:41:59 -- nvmf/common.sh@46 -- # : 0 00:11:45.403 06:41:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:45.403 06:41:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:45.403 06:41:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:45.403 06:41:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.403 06:41:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.403 06:41:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:45.403 06:41:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:45.403 06:41:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:45.403 06:41:59 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:45.403 06:41:59 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:45.403 06:41:59 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:11:45.403 06:41:59 -- target/multiconnection.sh@16 -- # nvmftestinit 00:11:45.403 06:41:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:45.403 06:41:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.403 06:41:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:45.403 06:41:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:45.403 06:41:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:45.403 06:41:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.403 06:41:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.403 06:41:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.403 06:41:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:45.403 06:41:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:45.403 06:41:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:45.403 06:41:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:45.403 06:41:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:45.403 06:41:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:45.403 06:41:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.403 06:41:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.403 06:41:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:45.404 06:41:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:45.404 06:41:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:45.404 06:41:59 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:45.404 06:41:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:45.404 06:41:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.404 06:41:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:45.404 06:41:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:45.404 06:41:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:45.404 06:41:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:45.404 06:41:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:45.404 06:41:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:45.404 Cannot find device "nvmf_tgt_br" 00:11:45.404 06:41:59 -- nvmf/common.sh@154 -- # true 00:11:45.404 06:41:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:45.404 Cannot find device "nvmf_tgt_br2" 00:11:45.404 06:41:59 -- nvmf/common.sh@155 -- # true 00:11:45.404 06:41:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:45.404 06:41:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:45.663 Cannot find device "nvmf_tgt_br" 00:11:45.663 06:41:59 -- nvmf/common.sh@157 -- # true 00:11:45.663 06:41:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:45.663 Cannot find device "nvmf_tgt_br2" 00:11:45.663 06:41:59 -- nvmf/common.sh@158 -- # true 00:11:45.663 06:41:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:45.663 06:41:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:45.663 06:41:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:45.663 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.663 06:41:59 -- nvmf/common.sh@161 -- # true 00:11:45.663 06:41:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:45.663 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:45.663 06:41:59 -- nvmf/common.sh@162 -- # true 00:11:45.663 06:41:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:45.663 06:41:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:45.663 06:41:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:45.663 06:41:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:45.663 06:41:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:45.663 06:41:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:45.663 06:41:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:45.663 06:41:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:45.663 06:41:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:45.663 06:41:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:45.663 06:41:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:45.663 06:41:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:45.663 06:41:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:45.663 06:41:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:45.663 06:41:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:11:45.663 06:41:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:45.663 06:41:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:45.663 06:41:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:45.663 06:41:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:45.663 06:41:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:45.663 06:41:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:45.663 06:41:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:45.663 06:41:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:45.922 06:41:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:45.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:11:45.922 00:11:45.922 --- 10.0.0.2 ping statistics --- 00:11:45.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.922 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:11:45.922 06:41:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:45.922 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:45.922 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:11:45.922 00:11:45.922 --- 10.0.0.3 ping statistics --- 00:11:45.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.922 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:45.922 06:41:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:45.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:45.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:45.922 00:11:45.922 --- 10.0.0.1 ping statistics --- 00:11:45.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.922 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:45.922 06:41:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.922 06:41:59 -- nvmf/common.sh@421 -- # return 0 00:11:45.922 06:41:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:45.922 06:41:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.922 06:41:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:45.922 06:41:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:45.922 06:41:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.922 06:41:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:45.922 06:41:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:45.922 06:41:59 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:11:45.922 06:41:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:45.922 06:41:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:45.922 06:41:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.922 06:41:59 -- nvmf/common.sh@469 -- # nvmfpid=66446 00:11:45.922 06:41:59 -- nvmf/common.sh@470 -- # waitforlisten 66446 00:11:45.922 06:41:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:45.922 06:41:59 -- common/autotest_common.sh@829 -- # '[' -z 66446 ']' 00:11:45.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:45.922 06:41:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.922 06:41:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:45.922 06:41:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.922 06:41:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:45.922 06:41:59 -- common/autotest_common.sh@10 -- # set +x 00:11:45.922 [2024-12-14 06:41:59.768696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:45.922 [2024-12-14 06:41:59.769120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.180 [2024-12-14 06:41:59.919975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.180 [2024-12-14 06:41:59.976311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:46.181 [2024-12-14 06:41:59.976694] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.181 [2024-12-14 06:41:59.976715] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.181 [2024-12-14 06:41:59.976724] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.181 [2024-12-14 06:41:59.976904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.181 [2024-12-14 06:41:59.977182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.181 [2024-12-14 06:41:59.977506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.181 [2024-12-14 06:41:59.977537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.174 06:42:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:47.174 06:42:00 -- common/autotest_common.sh@862 -- # return 0 00:11:47.174 06:42:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:47.174 06:42:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 06:42:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.174 06:42:00 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 [2024-12-14 06:42:00.809459] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@21 -- # seq 1 11 00:11:47.174 06:42:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:47.174 06:42:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 Malloc1 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 [2024-12-14 06:42:00.873012] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:47.174 06:42:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 Malloc2 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:47.174 06:42:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 Malloc3 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:47.174 06:42:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 Malloc4 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:11:47.174 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.174 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.174 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.174 06:42:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:11:47.175 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:47.175 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:47.175 06:42:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:11:47.175 06:42:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 Malloc5 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:47.175 06:42:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:11:47.175 06:42:01 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 Malloc6 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:47.175 06:42:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 Malloc7 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:47.175 06:42:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 Malloc8 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.175 06:42:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:47.175 06:42:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:11:47.175 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.175 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 Malloc9 00:11:47.434 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.434 06:42:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:11:47.434 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.434 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.434 06:42:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:11:47.434 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.434 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.434 06:42:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:11:47.434 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.434 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.434 06:42:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:47.434 06:42:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:11:47.434 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.434 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 Malloc10 00:11:47.434 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.434 06:42:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:11:47.434 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.434 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.434 06:42:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:11:47.434 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.434 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.434 06:42:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:11:47.434 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.434 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 06:42:01 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.434 06:42:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:47.434 06:42:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:11:47.434 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.434 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 Malloc11 00:11:47.434 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.434 06:42:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:11:47.434 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.434 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.434 06:42:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:11:47.434 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.434 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.434 06:42:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:11:47.434 06:42:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.434 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 06:42:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.434 06:42:01 -- target/multiconnection.sh@28 -- # seq 1 11 00:11:47.434 06:42:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:47.434 06:42:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.434 06:42:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:11:47.434 06:42:01 -- common/autotest_common.sh@1187 -- # local i=0 00:11:47.434 06:42:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.434 06:42:01 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:47.434 06:42:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:49.968 06:42:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:49.968 06:42:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:49.968 06:42:03 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:11:49.968 06:42:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:49.968 06:42:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.968 06:42:03 -- common/autotest_common.sh@1197 -- # return 0 00:11:49.968 06:42:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:49.968 06:42:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:11:49.968 06:42:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:11:49.968 06:42:03 -- common/autotest_common.sh@1187 -- # local i=0 00:11:49.968 06:42:03 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.968 06:42:03 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:49.968 06:42:03 -- common/autotest_common.sh@1194 -- # 
sleep 2 00:11:51.870 06:42:05 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:51.870 06:42:05 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:51.870 06:42:05 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:11:51.870 06:42:05 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:51.870 06:42:05 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.870 06:42:05 -- common/autotest_common.sh@1197 -- # return 0 00:11:51.870 06:42:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:51.870 06:42:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:11:51.870 06:42:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:11:51.870 06:42:05 -- common/autotest_common.sh@1187 -- # local i=0 00:11:51.870 06:42:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.870 06:42:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:51.870 06:42:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:53.772 06:42:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:53.772 06:42:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:53.772 06:42:07 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:11:54.031 06:42:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:54.031 06:42:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.031 06:42:07 -- common/autotest_common.sh@1197 -- # return 0 00:11:54.031 06:42:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:54.031 06:42:07 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:11:54.031 06:42:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:11:54.031 06:42:07 -- common/autotest_common.sh@1187 -- # local i=0 00:11:54.031 06:42:07 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.031 06:42:07 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:54.031 06:42:07 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:55.934 06:42:09 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:55.934 06:42:09 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:55.934 06:42:09 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:11:56.193 06:42:09 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:56.193 06:42:09 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.193 06:42:09 -- common/autotest_common.sh@1197 -- # return 0 00:11:56.193 06:42:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:56.193 06:42:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:11:56.193 06:42:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:11:56.193 06:42:10 -- common/autotest_common.sh@1187 -- # local i=0 00:11:56.193 06:42:10 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 
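The xtrace above interleaves two parts of the test: the rpc_cmd calls from multiconnection.sh (lines 21-25 in the trace) that give every cnode a malloc bdev, a subsystem, a namespace and a TCP listener, and the host-side loop (lines 28-30) that connects to each subsystem and waits for its serial number to show up in lsblk. Reconstructed from this log alone, the pattern is roughly the sketch below; rpc_cmd is assumed to forward to scripts/rpc.py against the running target, and the body of waitforserial is an approximation of the autotest_common.sh helper being traced, not its verbatim source.

# Target side: one malloc bdev + subsystem + namespace + TCP listener per cnode.
# Reconstructed from the trace; NVMF_SUBSYS=11 in this run (seq 1 11 above).
for i in $(seq 1 $NVMF_SUBSYS); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

# Host side: connect and wait for a block device with the matching serial.
# The 15-attempt / 2-second cadence comes from the autotest_common.sh trace;
# the function body itself is a reconstruction, not the original helper.
waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "nqn.2016-06.io.spdk:cnode$i" \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 \
        --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32
    waitforserial "SPDK$i"
done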
00:11:56.193 06:42:10 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:56.193 06:42:10 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:58.095 06:42:12 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:58.095 06:42:12 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:58.095 06:42:12 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:11:58.353 06:42:12 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:58.353 06:42:12 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.353 06:42:12 -- common/autotest_common.sh@1197 -- # return 0 00:11:58.353 06:42:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:11:58.353 06:42:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:11:58.353 06:42:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:11:58.353 06:42:12 -- common/autotest_common.sh@1187 -- # local i=0 00:11:58.353 06:42:12 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.353 06:42:12 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:58.353 06:42:12 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:00.276 06:42:14 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:00.276 06:42:14 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:12:00.276 06:42:14 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:00.533 06:42:14 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:00.533 06:42:14 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.533 06:42:14 -- common/autotest_common.sh@1197 -- # return 0 00:12:00.533 06:42:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:00.533 06:42:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:12:00.533 06:42:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:12:00.533 06:42:14 -- common/autotest_common.sh@1187 -- # local i=0 00:12:00.533 06:42:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.533 06:42:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:00.533 06:42:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:03.062 06:42:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:03.062 06:42:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:03.062 06:42:16 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:12:03.062 06:42:16 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:03.062 06:42:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:03.062 06:42:16 -- common/autotest_common.sh@1197 -- # return 0 00:12:03.062 06:42:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:03.062 06:42:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:12:03.062 06:42:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:12:03.062 06:42:16 -- 
common/autotest_common.sh@1187 -- # local i=0 00:12:03.062 06:42:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.062 06:42:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:03.062 06:42:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:04.961 06:42:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:04.961 06:42:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:04.961 06:42:18 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:12:04.961 06:42:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:04.961 06:42:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.961 06:42:18 -- common/autotest_common.sh@1197 -- # return 0 00:12:04.961 06:42:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:04.961 06:42:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:12:04.961 06:42:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:12:04.961 06:42:18 -- common/autotest_common.sh@1187 -- # local i=0 00:12:04.961 06:42:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.961 06:42:18 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:04.961 06:42:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:06.863 06:42:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:06.863 06:42:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:06.863 06:42:20 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:12:06.863 06:42:20 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:06.863 06:42:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.863 06:42:20 -- common/autotest_common.sh@1197 -- # return 0 00:12:06.863 06:42:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:06.863 06:42:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:12:07.121 06:42:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:12:07.121 06:42:20 -- common/autotest_common.sh@1187 -- # local i=0 00:12:07.121 06:42:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.121 06:42:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:07.121 06:42:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:09.022 06:42:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:09.022 06:42:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:09.022 06:42:22 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:12:09.022 06:42:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:09.022 06:42:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.022 06:42:22 -- common/autotest_common.sh@1197 -- # return 0 00:12:09.022 06:42:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:09.022 06:42:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n 
nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:12:09.281 06:42:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:12:09.281 06:42:23 -- common/autotest_common.sh@1187 -- # local i=0 00:12:09.281 06:42:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.281 06:42:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:09.281 06:42:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:11.183 06:42:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:11.183 06:42:25 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:12:11.183 06:42:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:11.183 06:42:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:11.183 06:42:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.183 06:42:25 -- common/autotest_common.sh@1197 -- # return 0 00:12:11.183 06:42:25 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:12:11.183 [global] 00:12:11.183 thread=1 00:12:11.183 invalidate=1 00:12:11.183 rw=read 00:12:11.183 time_based=1 00:12:11.183 runtime=10 00:12:11.183 ioengine=libaio 00:12:11.183 direct=1 00:12:11.183 bs=262144 00:12:11.183 iodepth=64 00:12:11.183 norandommap=1 00:12:11.183 numjobs=1 00:12:11.183 00:12:11.183 [job0] 00:12:11.183 filename=/dev/nvme0n1 00:12:11.183 [job1] 00:12:11.183 filename=/dev/nvme10n1 00:12:11.440 [job2] 00:12:11.440 filename=/dev/nvme1n1 00:12:11.440 [job3] 00:12:11.440 filename=/dev/nvme2n1 00:12:11.440 [job4] 00:12:11.440 filename=/dev/nvme3n1 00:12:11.440 [job5] 00:12:11.440 filename=/dev/nvme4n1 00:12:11.440 [job6] 00:12:11.440 filename=/dev/nvme5n1 00:12:11.440 [job7] 00:12:11.440 filename=/dev/nvme6n1 00:12:11.440 [job8] 00:12:11.440 filename=/dev/nvme7n1 00:12:11.440 [job9] 00:12:11.440 filename=/dev/nvme8n1 00:12:11.440 [job10] 00:12:11.440 filename=/dev/nvme9n1 00:12:11.440 Could not set queue depth (nvme0n1) 00:12:11.440 Could not set queue depth (nvme10n1) 00:12:11.440 Could not set queue depth (nvme1n1) 00:12:11.440 Could not set queue depth (nvme2n1) 00:12:11.440 Could not set queue depth (nvme3n1) 00:12:11.440 Could not set queue depth (nvme4n1) 00:12:11.440 Could not set queue depth (nvme5n1) 00:12:11.440 Could not set queue depth (nvme6n1) 00:12:11.440 Could not set queue depth (nvme7n1) 00:12:11.440 Could not set queue depth (nvme8n1) 00:12:11.440 Could not set queue depth (nvme9n1) 00:12:11.697 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:11.697 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:11.697 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:11.697 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:11.697 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:11.697 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:11.697 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:11.697 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:11.697 job8: (g=0): rw=read, bs=(R) 
256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:11.697 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:11.697 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:11.697 fio-3.35 00:12:11.697 Starting 11 threads 00:12:23.928 00:12:23.928 job0: (groupid=0, jobs=1): err= 0: pid=66907: Sat Dec 14 06:42:35 2024 00:12:23.928 read: IOPS=642, BW=161MiB/s (168MB/s)(1610MiB/10024msec) 00:12:23.928 slat (usec): min=18, max=45450, avg=1528.92, stdev=3595.37 00:12:23.928 clat (msec): min=9, max=187, avg=97.96, stdev=24.16 00:12:23.928 lat (msec): min=9, max=187, avg=99.49, stdev=24.56 00:12:23.928 clat percentiles (msec): 00:12:23.928 | 1.00th=[ 45], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 84], 00:12:23.928 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 92], 00:12:23.928 | 70.00th=[ 95], 80.00th=[ 109], 90.00th=[ 142], 95.00th=[ 146], 00:12:23.928 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 167], 99.95th=[ 171], 00:12:23.928 | 99.99th=[ 188] 00:12:23.928 bw ( KiB/s): min=110300, max=193024, per=8.98%, avg=163200.20, stdev=31969.48, samples=20 00:12:23.928 iops : min= 430, max= 754, avg=637.40, stdev=124.92, samples=20 00:12:23.928 lat (msec) : 10=0.02%, 20=0.56%, 50=0.51%, 100=76.88%, 250=22.04% 00:12:23.928 cpu : usr=0.48%, sys=2.94%, ctx=1468, majf=0, minf=4097 00:12:23.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:23.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:23.928 issued rwts: total=6439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:23.928 job1: (groupid=0, jobs=1): err= 0: pid=66908: Sat Dec 14 06:42:35 2024 00:12:23.928 read: IOPS=790, BW=198MiB/s (207MB/s)(1993MiB/10087msec) 00:12:23.928 slat (usec): min=16, max=115567, avg=1250.07, stdev=3285.60 00:12:23.928 clat (msec): min=13, max=194, avg=79.63, stdev=36.85 00:12:23.928 lat (msec): min=13, max=219, avg=80.88, stdev=37.42 00:12:23.928 clat percentiles (msec): 00:12:23.928 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 33], 00:12:23.928 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 73], 60.00th=[ 109], 00:12:23.928 | 70.00th=[ 112], 80.00th=[ 115], 90.00th=[ 118], 95.00th=[ 123], 00:12:23.928 | 99.00th=[ 144], 99.50th=[ 161], 99.90th=[ 186], 99.95th=[ 186], 00:12:23.928 | 99.99th=[ 194] 00:12:23.928 bw ( KiB/s): min=131072, max=513024, per=11.14%, avg=202380.95, stdev=112802.10, samples=20 00:12:23.928 iops : min= 512, max= 2004, avg=790.50, stdev=440.65, samples=20 00:12:23.928 lat (msec) : 20=0.40%, 50=26.65%, 100=24.09%, 250=48.86% 00:12:23.928 cpu : usr=0.38%, sys=2.55%, ctx=1734, majf=0, minf=4097 00:12:23.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:23.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:23.928 issued rwts: total=7970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:23.928 job2: (groupid=0, jobs=1): err= 0: pid=66909: Sat Dec 14 06:42:35 2024 00:12:23.928 read: IOPS=523, BW=131MiB/s (137MB/s)(1322MiB/10091msec) 00:12:23.928 slat (usec): min=17, max=75661, avg=1885.57, stdev=4459.59 00:12:23.928 
clat (msec): min=55, max=214, avg=120.10, stdev=15.65 00:12:23.928 lat (msec): min=56, max=214, avg=121.98, stdev=16.08 00:12:23.928 clat percentiles (msec): 00:12:23.928 | 1.00th=[ 99], 5.00th=[ 105], 10.00th=[ 107], 20.00th=[ 109], 00:12:23.928 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 117], 00:12:23.928 | 70.00th=[ 123], 80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 150], 00:12:23.928 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 197], 99.95th=[ 205], 00:12:23.928 | 99.99th=[ 215] 00:12:23.928 bw ( KiB/s): min=102605, max=145920, per=7.36%, avg=133716.00, stdev=15034.99, samples=20 00:12:23.928 iops : min= 400, max= 570, avg=522.20, stdev=58.76, samples=20 00:12:23.928 lat (msec) : 100=1.59%, 250=98.41% 00:12:23.928 cpu : usr=0.34%, sys=2.49%, ctx=1232, majf=0, minf=4097 00:12:23.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:23.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:23.928 issued rwts: total=5287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:23.928 job3: (groupid=0, jobs=1): err= 0: pid=66910: Sat Dec 14 06:42:35 2024 00:12:23.928 read: IOPS=522, BW=131MiB/s (137MB/s)(1317MiB/10087msec) 00:12:23.928 slat (usec): min=19, max=79868, avg=1893.52, stdev=4887.85 00:12:23.928 clat (msec): min=35, max=220, avg=120.51, stdev=15.72 00:12:23.928 lat (msec): min=35, max=220, avg=122.41, stdev=16.26 00:12:23.928 clat percentiles (msec): 00:12:23.928 | 1.00th=[ 99], 5.00th=[ 105], 10.00th=[ 107], 20.00th=[ 110], 00:12:23.928 | 30.00th=[ 112], 40.00th=[ 113], 50.00th=[ 115], 60.00th=[ 118], 00:12:23.928 | 70.00th=[ 123], 80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 150], 00:12:23.928 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 194], 99.95th=[ 194], 00:12:23.928 | 99.99th=[ 222] 00:12:23.928 bw ( KiB/s): min=102912, max=148480, per=7.34%, avg=133248.00, stdev=15634.10, samples=20 00:12:23.928 iops : min= 402, max= 580, avg=520.50, stdev=61.07, samples=20 00:12:23.928 lat (msec) : 50=0.27%, 100=1.27%, 250=98.46% 00:12:23.928 cpu : usr=0.30%, sys=1.88%, ctx=1245, majf=0, minf=4097 00:12:23.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:23.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:23.928 issued rwts: total=5268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:23.928 job4: (groupid=0, jobs=1): err= 0: pid=66911: Sat Dec 14 06:42:35 2024 00:12:23.928 read: IOPS=524, BW=131MiB/s (137MB/s)(1324MiB/10100msec) 00:12:23.928 slat (usec): min=16, max=50200, avg=1884.50, stdev=4215.77 00:12:23.928 clat (msec): min=23, max=202, avg=119.98, stdev=15.74 00:12:23.928 lat (msec): min=23, max=213, avg=121.86, stdev=16.10 00:12:23.928 clat percentiles (msec): 00:12:23.928 | 1.00th=[ 93], 5.00th=[ 105], 10.00th=[ 107], 20.00th=[ 110], 00:12:23.928 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 115], 60.00th=[ 118], 00:12:23.928 | 70.00th=[ 123], 80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 148], 00:12:23.928 | 99.00th=[ 159], 99.50th=[ 174], 99.90th=[ 197], 99.95th=[ 203], 00:12:23.928 | 99.99th=[ 203] 00:12:23.928 bw ( KiB/s): min=111104, max=146650, per=7.37%, avg=133921.25, stdev=13993.88, samples=20 00:12:23.928 iops : min= 434, max= 572, avg=523.00, 
stdev=54.56, samples=20 00:12:23.928 lat (msec) : 50=0.55%, 100=1.32%, 250=98.13% 00:12:23.928 cpu : usr=0.28%, sys=1.97%, ctx=1275, majf=0, minf=4097 00:12:23.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:23.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:23.928 issued rwts: total=5296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.928 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:23.928 job5: (groupid=0, jobs=1): err= 0: pid=66912: Sat Dec 14 06:42:35 2024 00:12:23.928 read: IOPS=590, BW=148MiB/s (155MB/s)(1491MiB/10095msec) 00:12:23.928 slat (usec): min=16, max=27467, avg=1674.39, stdev=3636.26 00:12:23.928 clat (msec): min=39, max=195, avg=106.53, stdev=15.61 00:12:23.928 lat (msec): min=39, max=202, avg=108.21, stdev=15.87 00:12:23.928 clat percentiles (msec): 00:12:23.928 | 1.00th=[ 62], 5.00th=[ 81], 10.00th=[ 85], 20.00th=[ 90], 00:12:23.928 | 30.00th=[ 105], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 113], 00:12:23.928 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 124], 00:12:23.928 | 99.00th=[ 132], 99.50th=[ 157], 99.90th=[ 192], 99.95th=[ 194], 00:12:23.928 | 99.99th=[ 194] 00:12:23.928 bw ( KiB/s): min=132608, max=193536, per=8.31%, avg=150964.10, stdev=19057.52, samples=20 00:12:23.928 iops : min= 518, max= 756, avg=589.55, stdev=74.46, samples=20 00:12:23.929 lat (msec) : 50=0.40%, 100=25.65%, 250=73.95% 00:12:23.929 cpu : usr=0.32%, sys=2.21%, ctx=1415, majf=0, minf=4097 00:12:23.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:23.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:23.929 issued rwts: total=5962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.929 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:23.929 job6: (groupid=0, jobs=1): err= 0: pid=66913: Sat Dec 14 06:42:35 2024 00:12:23.929 read: IOPS=762, BW=191MiB/s (200MB/s)(1911MiB/10025msec) 00:12:23.929 slat (usec): min=19, max=66222, avg=1288.25, stdev=3070.67 00:12:23.929 clat (msec): min=8, max=158, avg=82.47, stdev=18.98 00:12:23.929 lat (msec): min=8, max=165, avg=83.76, stdev=19.18 00:12:23.929 clat percentiles (msec): 00:12:23.929 | 1.00th=[ 28], 5.00th=[ 55], 10.00th=[ 58], 20.00th=[ 64], 00:12:23.929 | 30.00th=[ 78], 40.00th=[ 83], 50.00th=[ 86], 60.00th=[ 89], 00:12:23.929 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 99], 95.00th=[ 110], 00:12:23.929 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 150], 99.95th=[ 155], 00:12:23.929 | 99.99th=[ 159] 00:12:23.929 bw ( KiB/s): min=130048, max=273920, per=10.68%, avg=194045.00, stdev=36053.19, samples=20 00:12:23.929 iops : min= 508, max= 1070, avg=757.90, stdev=140.87, samples=20 00:12:23.929 lat (msec) : 10=0.01%, 20=0.20%, 50=2.64%, 100=89.30%, 250=7.85% 00:12:23.929 cpu : usr=0.29%, sys=2.77%, ctx=1652, majf=0, minf=4097 00:12:23.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:23.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:23.929 issued rwts: total=7645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.929 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:23.929 job7: (groupid=0, jobs=1): err= 0: pid=66914: Sat Dec 14 06:42:35 2024 
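As an aside on the read pass whose per-job results are listed here: the [global] and [jobN] lines that fio echoed in flattened form near the top of this run describe an ini-style job file. The sketch below restores that layout; every value and filename is copied from the log, while the heredoc, the multiconnection.fio name and the final fio invocation are only an illustration of what the fio-wrapper call presumably amounts to, not its actual code.

# Sketch of the read-phase job description; values taken from the log above.
cat > multiconnection.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme10n1
[job2]
filename=/dev/nvme1n1
[job3]
filename=/dev/nvme2n1
[job4]
filename=/dev/nvme3n1
[job5]
filename=/dev/nvme4n1
[job6]
filename=/dev/nvme5n1
[job7]
filename=/dev/nvme6n1
[job8]
filename=/dev/nvme7n1
[job9]
filename=/dev/nvme8n1
[job10]
filename=/dev/nvme9n1
EOF
fio multiconnection.fio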
00:12:23.929 read: IOPS=1080, BW=270MiB/s (283MB/s)(2709MiB/10027msec) 00:12:23.929 slat (usec): min=16, max=23440, avg=919.68, stdev=2287.25 00:12:23.929 clat (msec): min=10, max=106, avg=58.23, stdev=28.40 00:12:23.929 lat (msec): min=11, max=109, avg=59.15, stdev=28.83 00:12:23.929 clat percentiles (msec): 00:12:23.929 | 1.00th=[ 28], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 32], 00:12:23.929 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 35], 60.00th=[ 82], 00:12:23.929 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 94], 95.00th=[ 96], 00:12:23.929 | 99.00th=[ 101], 99.50th=[ 103], 99.90th=[ 106], 99.95th=[ 106], 00:12:23.929 | 99.99th=[ 107] 00:12:23.929 bw ( KiB/s): min=176128, max=511488, per=15.18%, avg=275726.85, stdev=145438.53, samples=20 00:12:23.929 iops : min= 688, max= 1998, avg=1077.00, stdev=568.16, samples=20 00:12:23.929 lat (msec) : 20=0.06%, 50=53.04%, 100=45.94%, 250=0.95% 00:12:23.929 cpu : usr=0.45%, sys=3.51%, ctx=2262, majf=0, minf=4097 00:12:23.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:23.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:23.929 issued rwts: total=10835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.929 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:23.929 job8: (groupid=0, jobs=1): err= 0: pid=66915: Sat Dec 14 06:42:35 2024 00:12:23.929 read: IOPS=585, BW=146MiB/s (153MB/s)(1477MiB/10095msec) 00:12:23.929 slat (usec): min=20, max=44576, avg=1688.36, stdev=3698.73 00:12:23.929 clat (msec): min=17, max=209, avg=107.46, stdev=15.00 00:12:23.929 lat (msec): min=25, max=209, avg=109.15, stdev=15.26 00:12:23.929 clat percentiles (msec): 00:12:23.929 | 1.00th=[ 68], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 92], 00:12:23.929 | 30.00th=[ 106], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 113], 00:12:23.929 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 122], 95.00th=[ 125], 00:12:23.929 | 99.00th=[ 134], 99.50th=[ 150], 99.90th=[ 203], 99.95th=[ 209], 00:12:23.929 | 99.99th=[ 209] 00:12:23.929 bw ( KiB/s): min=131584, max=189952, per=8.24%, avg=149606.85, stdev=17251.26, samples=20 00:12:23.929 iops : min= 514, max= 742, avg=584.25, stdev=67.43, samples=20 00:12:23.929 lat (msec) : 20=0.02%, 100=25.30%, 250=74.68% 00:12:23.929 cpu : usr=0.30%, sys=2.23%, ctx=1397, majf=0, minf=4097 00:12:23.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:23.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:23.929 issued rwts: total=5908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.929 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:23.929 job9: (groupid=0, jobs=1): err= 0: pid=66921: Sat Dec 14 06:42:35 2024 00:12:23.929 read: IOPS=516, BW=129MiB/s (135MB/s)(1301MiB/10081msec) 00:12:23.929 slat (usec): min=17, max=80306, avg=1895.48, stdev=4464.23 00:12:23.929 clat (msec): min=42, max=219, avg=121.90, stdev=14.76 00:12:23.929 lat (msec): min=43, max=219, avg=123.79, stdev=15.16 00:12:23.929 clat percentiles (msec): 00:12:23.929 | 1.00th=[ 102], 5.00th=[ 107], 10.00th=[ 109], 20.00th=[ 112], 00:12:23.929 | 30.00th=[ 114], 40.00th=[ 116], 50.00th=[ 117], 60.00th=[ 121], 00:12:23.929 | 70.00th=[ 126], 80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 148], 00:12:23.929 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 190], 99.95th=[ 190], 00:12:23.929 | 99.99th=[ 
220] 00:12:23.929 bw ( KiB/s): min=106496, max=148480, per=7.25%, avg=131620.90, stdev=13952.47, samples=20 00:12:23.929 iops : min= 416, max= 580, avg=514.10, stdev=54.47, samples=20 00:12:23.929 lat (msec) : 50=0.42%, 100=0.29%, 250=99.29% 00:12:23.929 cpu : usr=0.24%, sys=2.02%, ctx=1236, majf=0, minf=4097 00:12:23.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:23.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:23.929 issued rwts: total=5205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.929 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:23.929 job10: (groupid=0, jobs=1): err= 0: pid=66923: Sat Dec 14 06:42:35 2024 00:12:23.929 read: IOPS=579, BW=145MiB/s (152MB/s)(1462MiB/10095msec) 00:12:23.929 slat (usec): min=16, max=28359, avg=1705.67, stdev=3658.48 00:12:23.929 clat (msec): min=21, max=199, avg=108.49, stdev=15.54 00:12:23.929 lat (msec): min=22, max=215, avg=110.19, stdev=15.72 00:12:23.929 clat percentiles (msec): 00:12:23.929 | 1.00th=[ 70], 5.00th=[ 82], 10.00th=[ 86], 20.00th=[ 93], 00:12:23.929 | 30.00th=[ 107], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 114], 00:12:23.929 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 124], 95.00th=[ 128], 00:12:23.929 | 99.00th=[ 140], 99.50th=[ 159], 99.90th=[ 201], 99.95th=[ 201], 00:12:23.929 | 99.99th=[ 201] 00:12:23.929 bw ( KiB/s): min=132096, max=189440, per=8.15%, avg=148053.80, stdev=16453.87, samples=20 00:12:23.929 iops : min= 516, max= 740, avg=578.20, stdev=64.34, samples=20 00:12:23.929 lat (msec) : 50=0.26%, 100=23.96%, 250=75.79% 00:12:23.929 cpu : usr=0.29%, sys=2.41%, ctx=1370, majf=0, minf=4097 00:12:23.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:23.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:23.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:23.929 issued rwts: total=5848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:23.929 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:23.929 00:12:23.929 Run status group 0 (all jobs): 00:12:23.929 READ: bw=1774MiB/s (1860MB/s), 129MiB/s-270MiB/s (135MB/s-283MB/s), io=17.5GiB (18.8GB), run=10024-10100msec 00:12:23.929 00:12:23.929 Disk stats (read/write): 00:12:23.929 nvme0n1: ios=12439/0, merge=0/0, ticks=1206603/0, in_queue=1206603, util=97.87% 00:12:23.929 nvme10n1: ios=15820/0, merge=0/0, ticks=1231755/0, in_queue=1231755, util=97.98% 00:12:23.929 nvme1n1: ios=10467/0, merge=0/0, ticks=1230418/0, in_queue=1230418, util=98.22% 00:12:23.929 nvme2n1: ios=10427/0, merge=0/0, ticks=1231203/0, in_queue=1231203, util=98.25% 00:12:23.929 nvme3n1: ios=10488/0, merge=0/0, ticks=1231020/0, in_queue=1231020, util=98.43% 00:12:23.929 nvme4n1: ios=11822/0, merge=0/0, ticks=1233903/0, in_queue=1233903, util=98.51% 00:12:23.929 nvme5n1: ios=14867/0, merge=0/0, ticks=1206072/0, in_queue=1206072, util=98.58% 00:12:23.929 nvme6n1: ios=21607/0, merge=0/0, ticks=1241813/0, in_queue=1241813, util=98.70% 00:12:23.929 nvme7n1: ios=11720/0, merge=0/0, ticks=1231303/0, in_queue=1231303, util=98.94% 00:12:23.929 nvme8n1: ios=10304/0, merge=0/0, ticks=1229599/0, in_queue=1229599, util=98.91% 00:12:23.929 nvme9n1: ios=11603/0, merge=0/0, ticks=1231907/0, in_queue=1231907, util=99.11% 00:12:23.929 06:42:35 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t 
randwrite -r 10 00:12:23.929 [global] 00:12:23.929 thread=1 00:12:23.929 invalidate=1 00:12:23.929 rw=randwrite 00:12:23.929 time_based=1 00:12:23.929 runtime=10 00:12:23.929 ioengine=libaio 00:12:23.929 direct=1 00:12:23.929 bs=262144 00:12:23.929 iodepth=64 00:12:23.929 norandommap=1 00:12:23.929 numjobs=1 00:12:23.929 00:12:23.929 [job0] 00:12:23.929 filename=/dev/nvme0n1 00:12:23.929 [job1] 00:12:23.929 filename=/dev/nvme10n1 00:12:23.929 [job2] 00:12:23.929 filename=/dev/nvme1n1 00:12:23.929 [job3] 00:12:23.929 filename=/dev/nvme2n1 00:12:23.929 [job4] 00:12:23.929 filename=/dev/nvme3n1 00:12:23.929 [job5] 00:12:23.929 filename=/dev/nvme4n1 00:12:23.929 [job6] 00:12:23.929 filename=/dev/nvme5n1 00:12:23.929 [job7] 00:12:23.929 filename=/dev/nvme6n1 00:12:23.929 [job8] 00:12:23.929 filename=/dev/nvme7n1 00:12:23.929 [job9] 00:12:23.929 filename=/dev/nvme8n1 00:12:23.929 [job10] 00:12:23.929 filename=/dev/nvme9n1 00:12:23.929 Could not set queue depth (nvme0n1) 00:12:23.929 Could not set queue depth (nvme10n1) 00:12:23.929 Could not set queue depth (nvme1n1) 00:12:23.929 Could not set queue depth (nvme2n1) 00:12:23.929 Could not set queue depth (nvme3n1) 00:12:23.929 Could not set queue depth (nvme4n1) 00:12:23.929 Could not set queue depth (nvme5n1) 00:12:23.929 Could not set queue depth (nvme6n1) 00:12:23.929 Could not set queue depth (nvme7n1) 00:12:23.929 Could not set queue depth (nvme8n1) 00:12:23.929 Could not set queue depth (nvme9n1) 00:12:23.929 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:23.929 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:23.930 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:23.930 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:23.930 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:23.930 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:23.930 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:23.930 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:23.930 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:23.930 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:23.930 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:12:23.930 fio-3.35 00:12:23.930 Starting 11 threads 00:12:33.902 00:12:33.902 job0: (groupid=0, jobs=1): err= 0: pid=67118: Sat Dec 14 06:42:46 2024 00:12:33.902 write: IOPS=705, BW=176MiB/s (185MB/s)(1777MiB/10077msec); 0 zone resets 00:12:33.902 slat (usec): min=15, max=39905, avg=1388.24, stdev=2411.07 00:12:33.902 clat (msec): min=18, max=162, avg=89.31, stdev=12.35 00:12:33.902 lat (msec): min=18, max=162, avg=90.70, stdev=12.25 00:12:33.902 clat percentiles (msec): 00:12:33.902 | 1.00th=[ 81], 5.00th=[ 82], 10.00th=[ 82], 20.00th=[ 84], 00:12:33.902 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 87], 60.00th=[ 87], 00:12:33.902 | 70.00th=[ 88], 80.00th=[ 89], 
90.00th=[ 110], 95.00th=[ 121], 00:12:33.902 | 99.00th=[ 136], 99.50th=[ 150], 99.90th=[ 159], 99.95th=[ 161], 00:12:33.902 | 99.99th=[ 163] 00:12:33.902 bw ( KiB/s): min=122368, max=190845, per=11.61%, avg=180396.65, stdev=19980.10, samples=20 00:12:33.902 iops : min= 478, max= 745, avg=704.65, stdev=78.03, samples=20 00:12:33.902 lat (msec) : 20=0.06%, 50=0.23%, 100=89.11%, 250=10.61% 00:12:33.902 cpu : usr=1.31%, sys=2.05%, ctx=8800, majf=0, minf=1 00:12:33.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:33.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:33.902 issued rwts: total=0,7109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.902 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:33.902 job1: (groupid=0, jobs=1): err= 0: pid=67119: Sat Dec 14 06:42:46 2024 00:12:33.902 write: IOPS=782, BW=196MiB/s (205MB/s)(1971MiB/10076msec); 0 zone resets 00:12:33.902 slat (usec): min=17, max=54130, avg=1263.43, stdev=2233.07 00:12:33.902 clat (msec): min=49, max=162, avg=80.52, stdev=12.73 00:12:33.902 lat (msec): min=50, max=162, avg=81.78, stdev=12.74 00:12:33.902 clat percentiles (msec): 00:12:33.902 | 1.00th=[ 52], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 81], 00:12:33.902 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 86], 00:12:33.902 | 70.00th=[ 87], 80.00th=[ 87], 90.00th=[ 87], 95.00th=[ 88], 00:12:33.902 | 99.00th=[ 115], 99.50th=[ 125], 99.90th=[ 153], 99.95th=[ 157], 00:12:33.902 | 99.99th=[ 163] 00:12:33.902 bw ( KiB/s): min=186880, max=303616, per=12.89%, avg=200212.45, stdev=29007.69, samples=20 00:12:33.902 iops : min= 730, max= 1186, avg=782.05, stdev=113.31, samples=20 00:12:33.902 lat (msec) : 50=0.03%, 100=98.17%, 250=1.80% 00:12:33.902 cpu : usr=1.40%, sys=1.87%, ctx=9206, majf=0, minf=1 00:12:33.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:33.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:33.902 issued rwts: total=0,7883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.902 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:33.902 job2: (groupid=0, jobs=1): err= 0: pid=67131: Sat Dec 14 06:42:46 2024 00:12:33.902 write: IOPS=346, BW=86.7MiB/s (90.9MB/s)(881MiB/10156msec); 0 zone resets 00:12:33.902 slat (usec): min=18, max=77323, avg=2805.30, stdev=5031.65 00:12:33.902 clat (msec): min=26, max=333, avg=181.63, stdev=20.48 00:12:33.902 lat (msec): min=26, max=333, avg=184.43, stdev=20.28 00:12:33.902 clat percentiles (msec): 00:12:33.902 | 1.00th=[ 90], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 176], 00:12:33.902 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 186], 00:12:33.902 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 188], 00:12:33.902 | 99.00th=[ 230], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 334], 00:12:33.902 | 99.99th=[ 334] 00:12:33.902 bw ( KiB/s): min=86016, max=101376, per=5.70%, avg=88550.40, stdev=3140.73, samples=20 00:12:33.902 iops : min= 336, max= 396, avg=345.90, stdev=12.27, samples=20 00:12:33.902 lat (msec) : 50=0.40%, 100=0.91%, 250=97.84%, 500=0.85% 00:12:33.902 cpu : usr=0.59%, sys=1.03%, ctx=5163, majf=0, minf=1 00:12:33.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:12:33.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
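A quick sanity check that applies to any of these per-job lines: the reported bandwidth should be close to IOPS multiplied by the 262144-byte block size from the job file. Taking the randwrite job0 figures above (IOPS=705, BW=176MiB/s (185MB/s)) as an example:

# 705 IOs/s x 256 KiB per IO, converted to MiB/s; matches fio's 176MiB/s
# (about 185 MB/s in decimal units) within rounding.
echo $(( 705 * 262144 / 1024 / 1024 ))   # prints 176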
00:12:33.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:33.902 issued rwts: total=0,3522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.902 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:33.902 job3: (groupid=0, jobs=1): err= 0: pid=67132: Sat Dec 14 06:42:46 2024 00:12:33.902 write: IOPS=344, BW=86.1MiB/s (90.3MB/s)(875MiB/10164msec); 0 zone resets 00:12:33.902 slat (usec): min=20, max=26107, avg=2851.56, stdev=4925.74 00:12:33.902 clat (msec): min=28, max=344, avg=182.87, stdev=19.99 00:12:33.902 lat (msec): min=28, max=344, avg=185.72, stdev=19.69 00:12:33.902 clat percentiles (msec): 00:12:33.902 | 1.00th=[ 82], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 176], 00:12:33.902 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 186], 00:12:33.902 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:12:33.902 | 99.00th=[ 241], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 347], 00:12:33.902 | 99.99th=[ 347] 00:12:33.902 bw ( KiB/s): min=86016, max=92344, per=5.67%, avg=88022.00, stdev=1241.67, samples=20 00:12:33.902 iops : min= 336, max= 360, avg=343.80, stdev= 4.72, samples=20 00:12:33.902 lat (msec) : 50=0.46%, 100=0.80%, 250=97.77%, 500=0.97% 00:12:33.902 cpu : usr=0.67%, sys=1.08%, ctx=3719, majf=0, minf=1 00:12:33.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:12:33.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:33.902 issued rwts: total=0,3501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.902 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:33.902 job4: (groupid=0, jobs=1): err= 0: pid=67133: Sat Dec 14 06:42:46 2024 00:12:33.902 write: IOPS=342, BW=85.6MiB/s (89.7MB/s)(870MiB/10167msec); 0 zone resets 00:12:33.902 slat (usec): min=20, max=57682, avg=2871.10, stdev=5023.28 00:12:33.902 clat (msec): min=60, max=343, avg=184.02, stdev=16.13 00:12:33.902 lat (msec): min=60, max=343, avg=186.89, stdev=15.57 00:12:33.902 clat percentiles (msec): 00:12:33.902 | 1.00th=[ 128], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 176], 00:12:33.902 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 186], 00:12:33.902 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:12:33.902 | 99.00th=[ 239], 99.50th=[ 296], 99.90th=[ 334], 99.95th=[ 342], 00:12:33.902 | 99.99th=[ 342] 00:12:33.902 bw ( KiB/s): min=79872, max=88064, per=5.63%, avg=87466.40, stdev=1871.54, samples=20 00:12:33.902 iops : min= 312, max= 344, avg=341.65, stdev= 7.31, samples=20 00:12:33.902 lat (msec) : 100=0.57%, 250=98.45%, 500=0.98% 00:12:33.903 cpu : usr=0.59%, sys=0.84%, ctx=3729, majf=0, minf=1 00:12:33.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:12:33.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:33.903 issued rwts: total=0,3480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:33.903 job5: (groupid=0, jobs=1): err= 0: pid=67134: Sat Dec 14 06:42:46 2024 00:12:33.903 write: IOPS=349, BW=87.5MiB/s (91.7MB/s)(889MiB/10160msec); 0 zone resets 00:12:33.903 slat (usec): min=17, max=49182, avg=2778.63, stdev=4917.42 00:12:33.903 clat (msec): min=13, max=345, avg=180.11, stdev=26.50 00:12:33.903 lat (msec): min=13, max=345, avg=182.89, stdev=26.53 00:12:33.903 
clat percentiles (msec): 00:12:33.903 | 1.00th=[ 43], 5.00th=[ 155], 10.00th=[ 176], 20.00th=[ 176], 00:12:33.903 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 186], 00:12:33.903 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 188], 00:12:33.903 | 99.00th=[ 241], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 347], 00:12:33.903 | 99.99th=[ 347] 00:12:33.903 bw ( KiB/s): min=84480, max=119808, per=5.75%, avg=89369.60, stdev=7237.87, samples=20 00:12:33.903 iops : min= 330, max= 468, avg=349.10, stdev=28.27, samples=20 00:12:33.903 lat (msec) : 20=0.23%, 50=1.07%, 100=1.41%, 250=96.34%, 500=0.96% 00:12:33.903 cpu : usr=0.59%, sys=1.17%, ctx=4569, majf=0, minf=1 00:12:33.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:12:33.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:33.903 issued rwts: total=0,3554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:33.903 job6: (groupid=0, jobs=1): err= 0: pid=67135: Sat Dec 14 06:42:46 2024 00:12:33.903 write: IOPS=702, BW=176MiB/s (184MB/s)(1769MiB/10077msec); 0 zone resets 00:12:33.903 slat (usec): min=18, max=83145, avg=1408.54, stdev=2598.23 00:12:33.903 clat (msec): min=20, max=207, avg=89.69, stdev=14.44 00:12:33.903 lat (msec): min=20, max=207, avg=91.10, stdev=14.45 00:12:33.903 clat percentiles (msec): 00:12:33.903 | 1.00th=[ 81], 5.00th=[ 82], 10.00th=[ 82], 20.00th=[ 83], 00:12:33.903 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 87], 60.00th=[ 87], 00:12:33.903 | 70.00th=[ 88], 80.00th=[ 89], 90.00th=[ 106], 95.00th=[ 122], 00:12:33.903 | 99.00th=[ 153], 99.50th=[ 178], 99.90th=[ 194], 99.95th=[ 194], 00:12:33.903 | 99.99th=[ 209] 00:12:33.903 bw ( KiB/s): min=108544, max=190976, per=11.56%, avg=179539.35, stdev=22335.18, samples=20 00:12:33.903 iops : min= 424, max= 746, avg=701.30, stdev=87.23, samples=20 00:12:33.903 lat (msec) : 50=0.23%, 100=89.53%, 250=10.24% 00:12:33.903 cpu : usr=0.97%, sys=1.77%, ctx=8366, majf=0, minf=1 00:12:33.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:33.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:33.903 issued rwts: total=0,7077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:33.903 job7: (groupid=0, jobs=1): err= 0: pid=67136: Sat Dec 14 06:42:46 2024 00:12:33.903 write: IOPS=739, BW=185MiB/s (194MB/s)(1864MiB/10077msec); 0 zone resets 00:12:33.903 slat (usec): min=17, max=8630, avg=1336.02, stdev=2250.54 00:12:33.903 clat (msec): min=8, max=159, avg=85.16, stdev= 6.46 00:12:33.903 lat (msec): min=8, max=159, avg=86.49, stdev= 6.16 00:12:33.903 clat percentiles (msec): 00:12:33.903 | 1.00th=[ 79], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 82], 00:12:33.903 | 30.00th=[ 86], 40.00th=[ 86], 50.00th=[ 87], 60.00th=[ 87], 00:12:33.903 | 70.00th=[ 87], 80.00th=[ 87], 90.00th=[ 88], 95.00th=[ 89], 00:12:33.903 | 99.00th=[ 96], 99.50th=[ 112], 99.90th=[ 150], 99.95th=[ 155], 00:12:33.903 | 99.99th=[ 161] 00:12:33.903 bw ( KiB/s): min=180736, max=192512, per=12.17%, avg=189133.20, stdev=2618.38, samples=20 00:12:33.903 iops : min= 706, max= 752, avg=738.80, stdev=10.23, samples=20 00:12:33.903 lat (msec) : 10=0.05%, 20=0.11%, 50=0.43%, 100=98.79%, 250=0.62% 00:12:33.903 
cpu : usr=1.33%, sys=2.07%, ctx=9260, majf=0, minf=1 00:12:33.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:33.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:33.903 issued rwts: total=0,7454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:33.903 job8: (groupid=0, jobs=1): err= 0: pid=67137: Sat Dec 14 06:42:46 2024 00:12:33.903 write: IOPS=736, BW=184MiB/s (193MB/s)(1854MiB/10076msec); 0 zone resets 00:12:33.903 slat (usec): min=17, max=44046, avg=1342.63, stdev=2307.18 00:12:33.903 clat (msec): min=48, max=156, avg=85.57, stdev= 5.03 00:12:33.903 lat (msec): min=48, max=156, avg=86.92, stdev= 4.56 00:12:33.903 clat percentiles (msec): 00:12:33.903 | 1.00th=[ 81], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 83], 00:12:33.903 | 30.00th=[ 86], 40.00th=[ 86], 50.00th=[ 87], 60.00th=[ 87], 00:12:33.903 | 70.00th=[ 87], 80.00th=[ 87], 90.00th=[ 88], 95.00th=[ 89], 00:12:33.903 | 99.00th=[ 99], 99.50th=[ 120], 99.90th=[ 146], 99.95th=[ 150], 00:12:33.903 | 99.99th=[ 157] 00:12:33.903 bw ( KiB/s): min=160256, max=190976, per=12.12%, avg=188262.40, stdev=6702.68, samples=20 00:12:33.903 iops : min= 626, max= 746, avg=735.40, stdev=26.18, samples=20 00:12:33.903 lat (msec) : 50=0.05%, 100=99.02%, 250=0.93% 00:12:33.903 cpu : usr=1.31%, sys=2.11%, ctx=9520, majf=0, minf=1 00:12:33.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:33.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:33.903 issued rwts: total=0,7417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:33.903 job9: (groupid=0, jobs=1): err= 0: pid=67138: Sat Dec 14 06:42:46 2024 00:12:33.903 write: IOPS=714, BW=179MiB/s (187MB/s)(1801MiB/10077msec); 0 zone resets 00:12:33.903 slat (usec): min=15, max=68987, avg=1368.81, stdev=2513.10 00:12:33.903 clat (msec): min=14, max=212, avg=88.14, stdev=16.30 00:12:33.903 lat (msec): min=14, max=216, avg=89.51, stdev=16.37 00:12:33.903 clat percentiles (msec): 00:12:33.903 | 1.00th=[ 48], 5.00th=[ 81], 10.00th=[ 81], 20.00th=[ 82], 00:12:33.903 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 86], 60.00th=[ 87], 00:12:33.903 | 70.00th=[ 87], 80.00th=[ 87], 90.00th=[ 90], 95.00th=[ 120], 00:12:33.903 | 99.00th=[ 153], 99.50th=[ 207], 99.90th=[ 211], 99.95th=[ 213], 00:12:33.903 | 99.99th=[ 213] 00:12:33.903 bw ( KiB/s): min=106709, max=193024, per=11.77%, avg=182794.65, stdev=21680.42, samples=20 00:12:33.903 iops : min= 416, max= 754, avg=714.00, stdev=84.84, samples=20 00:12:33.903 lat (msec) : 20=0.06%, 50=1.01%, 100=89.42%, 250=9.51% 00:12:33.903 cpu : usr=1.02%, sys=1.83%, ctx=6432, majf=0, minf=1 00:12:33.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:33.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:33.903 issued rwts: total=0,7203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:33.903 job10: (groupid=0, jobs=1): err= 0: pid=67139: Sat Dec 14 06:42:46 2024 00:12:33.903 write: IOPS=344, BW=86.2MiB/s (90.4MB/s)(876MiB/10162msec); 0 zone resets 00:12:33.903 slat 
(usec): min=17, max=20791, avg=2848.82, stdev=4934.64 00:12:33.903 clat (msec): min=16, max=347, avg=182.68, stdev=22.61 00:12:33.903 lat (msec): min=16, max=347, avg=185.53, stdev=22.40 00:12:33.903 clat percentiles (msec): 00:12:33.903 | 1.00th=[ 59], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 176], 00:12:33.903 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 186], 00:12:33.903 | 70.00th=[ 188], 80.00th=[ 188], 90.00th=[ 188], 95.00th=[ 190], 00:12:33.903 | 99.00th=[ 243], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 347], 00:12:33.903 | 99.99th=[ 347] 00:12:33.903 bw ( KiB/s): min=84480, max=94208, per=5.67%, avg=88089.60, stdev=1940.59, samples=20 00:12:33.903 iops : min= 330, max= 368, avg=344.10, stdev= 7.58, samples=20 00:12:33.903 lat (msec) : 20=0.11%, 50=0.68%, 100=0.80%, 250=97.43%, 500=0.97% 00:12:33.903 cpu : usr=0.63%, sys=0.87%, ctx=3636, majf=0, minf=1 00:12:33.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:12:33.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:33.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:12:33.903 issued rwts: total=0,3504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:33.903 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:33.903 00:12:33.903 Run status group 0 (all jobs): 00:12:33.903 WRITE: bw=1517MiB/s (1591MB/s), 85.6MiB/s-196MiB/s (89.7MB/s-205MB/s), io=15.1GiB (16.2GB), run=10076-10167msec 00:12:33.903 00:12:33.903 Disk stats (read/write): 00:12:33.903 nvme0n1: ios=49/14084, merge=0/0, ticks=49/1217564, in_queue=1217613, util=97.91% 00:12:33.903 nvme10n1: ios=49/15620, merge=0/0, ticks=42/1215440, in_queue=1215482, util=97.99% 00:12:33.903 nvme1n1: ios=41/6901, merge=0/0, ticks=35/1209967, in_queue=1210002, util=97.88% 00:12:33.903 nvme2n1: ios=13/6869, merge=0/0, ticks=9/1211425, in_queue=1211434, util=97.99% 00:12:33.903 nvme3n1: ios=23/6819, merge=0/0, ticks=20/1210957, in_queue=1210977, util=98.00% 00:12:33.903 nvme4n1: ios=13/6978, merge=0/0, ticks=67/1210813, in_queue=1210880, util=98.20% 00:12:33.903 nvme5n1: ios=5/14028, merge=0/0, ticks=5/1217195, in_queue=1217200, util=98.42% 00:12:33.903 nvme6n1: ios=0/14758, merge=0/0, ticks=0/1215341, in_queue=1215341, util=98.34% 00:12:33.903 nvme7n1: ios=0/14664, merge=0/0, ticks=0/1214224, in_queue=1214224, util=98.49% 00:12:33.903 nvme8n1: ios=0/14265, merge=0/0, ticks=0/1217373, in_queue=1217373, util=98.80% 00:12:33.903 nvme9n1: ios=0/6879, merge=0/0, ticks=0/1211471, in_queue=1211471, util=98.90% 00:12:33.903 06:42:46 -- target/multiconnection.sh@36 -- # sync 00:12:33.903 06:42:46 -- target/multiconnection.sh@37 -- # seq 1 11 00:12:33.903 06:42:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.903 06:42:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.903 06:42:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:12:33.903 06:42:46 -- common/autotest_common.sh@1208 -- # local i=0 00:12:33.903 06:42:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:12:33.904 06:42:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:33.904 06:42:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:12:33.904 06:42:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:33.904 06:42:46 -- common/autotest_common.sh@1220 -- # return 0 00:12:33.904 06:42:46 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.904 06:42:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.904 06:42:46 -- common/autotest_common.sh@10 -- # set +x 00:12:33.904 06:42:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.904 06:42:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.904 06:42:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:12:33.904 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:12:33.904 06:42:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:12:33.904 06:42:46 -- common/autotest_common.sh@1208 -- # local i=0 00:12:33.904 06:42:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:33.904 06:42:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:12:33.904 06:42:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:33.904 06:42:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:12:33.904 06:42:46 -- common/autotest_common.sh@1220 -- # return 0 00:12:33.904 06:42:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:33.904 06:42:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.904 06:42:46 -- common/autotest_common.sh@10 -- # set +x 00:12:33.904 06:42:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.904 06:42:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.904 06:42:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:12:33.904 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:12:33.904 06:42:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:12:33.904 06:42:47 -- common/autotest_common.sh@1208 -- # local i=0 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:12:33.904 06:42:47 -- common/autotest_common.sh@1220 -- # return 0 00:12:33.904 06:42:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:33.904 06:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.904 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.904 06:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.904 06:42:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.904 06:42:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:12:33.904 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:12:33.904 06:42:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:12:33.904 06:42:47 -- common/autotest_common.sh@1208 -- # local i=0 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:12:33.904 06:42:47 -- common/autotest_common.sh@1220 -- # return 0 00:12:33.904 06:42:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 
00:12:33.904 06:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.904 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.904 06:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.904 06:42:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.904 06:42:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:12:33.904 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:12:33.904 06:42:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:12:33.904 06:42:47 -- common/autotest_common.sh@1208 -- # local i=0 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:12:33.904 06:42:47 -- common/autotest_common.sh@1220 -- # return 0 00:12:33.904 06:42:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:12:33.904 06:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.904 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.904 06:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.904 06:42:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.904 06:42:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:12:33.904 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:12:33.904 06:42:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:12:33.904 06:42:47 -- common/autotest_common.sh@1208 -- # local i=0 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:12:33.904 06:42:47 -- common/autotest_common.sh@1220 -- # return 0 00:12:33.904 06:42:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:12:33.904 06:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.904 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.904 06:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.904 06:42:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.904 06:42:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:12:33.904 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:12:33.904 06:42:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:12:33.904 06:42:47 -- common/autotest_common.sh@1208 -- # local i=0 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:12:33.904 06:42:47 -- common/autotest_common.sh@1220 -- # return 0 00:12:33.904 06:42:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:12:33.904 06:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.904 06:42:47 
-- common/autotest_common.sh@10 -- # set +x 00:12:33.904 06:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.904 06:42:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.904 06:42:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:12:33.904 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:12:33.904 06:42:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:12:33.904 06:42:47 -- common/autotest_common.sh@1208 -- # local i=0 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1220 -- # return 0 00:12:33.904 06:42:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:12:33.904 06:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.904 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.904 06:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.904 06:42:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.904 06:42:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:12:33.904 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:12:33.904 06:42:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:12:33.904 06:42:47 -- common/autotest_common.sh@1208 -- # local i=0 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1220 -- # return 0 00:12:33.904 06:42:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:12:33.904 06:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.904 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.904 06:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.904 06:42:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.904 06:42:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:12:33.904 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:12:33.904 06:42:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:12:33.904 06:42:47 -- common/autotest_common.sh@1208 -- # local i=0 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1220 -- # return 0 00:12:33.904 06:42:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:12:33.904 06:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.904 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.904 06:42:47 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.904 06:42:47 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:33.904 06:42:47 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:12:33.904 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:12:33.904 06:42:47 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:12:33.904 06:42:47 -- common/autotest_common.sh@1208 -- # local i=0 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:33.904 06:42:47 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:12:33.904 06:42:47 -- common/autotest_common.sh@1220 -- # return 0 00:12:33.904 06:42:47 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:12:33.904 06:42:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.904 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:12:33.905 06:42:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.905 06:42:47 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:12:33.905 06:42:47 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:33.905 06:42:47 -- target/multiconnection.sh@47 -- # nvmftestfini 00:12:33.905 06:42:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:33.905 06:42:47 -- nvmf/common.sh@116 -- # sync 00:12:33.905 06:42:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:33.905 06:42:47 -- nvmf/common.sh@119 -- # set +e 00:12:33.905 06:42:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:33.905 06:42:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:33.905 rmmod nvme_tcp 00:12:33.905 rmmod nvme_fabrics 00:12:34.164 rmmod nvme_keyring 00:12:34.164 06:42:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:34.164 06:42:47 -- nvmf/common.sh@123 -- # set -e 00:12:34.164 06:42:47 -- nvmf/common.sh@124 -- # return 0 00:12:34.164 06:42:47 -- nvmf/common.sh@477 -- # '[' -n 66446 ']' 00:12:34.164 06:42:47 -- nvmf/common.sh@478 -- # killprocess 66446 00:12:34.164 06:42:47 -- common/autotest_common.sh@936 -- # '[' -z 66446 ']' 00:12:34.164 06:42:47 -- common/autotest_common.sh@940 -- # kill -0 66446 00:12:34.164 06:42:47 -- common/autotest_common.sh@941 -- # uname 00:12:34.164 06:42:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:34.164 06:42:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66446 00:12:34.164 killing process with pid 66446 00:12:34.164 06:42:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:34.164 06:42:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:34.164 06:42:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66446' 00:12:34.164 06:42:47 -- common/autotest_common.sh@955 -- # kill 66446 00:12:34.164 06:42:47 -- common/autotest_common.sh@960 -- # wait 66446 00:12:34.423 06:42:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:34.423 06:42:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:34.423 06:42:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:34.423 06:42:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.423 06:42:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:34.423 06:42:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.423 
06:42:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.423 06:42:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.423 06:42:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:34.423 00:12:34.423 real 0m49.183s 00:12:34.423 user 2m41.907s 00:12:34.423 sys 0m33.925s 00:12:34.423 06:42:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:34.423 ************************************ 00:12:34.423 END TEST nvmf_multiconnection 00:12:34.423 06:42:48 -- common/autotest_common.sh@10 -- # set +x 00:12:34.423 ************************************ 00:12:34.423 06:42:48 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:12:34.423 06:42:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:34.423 06:42:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:34.423 06:42:48 -- common/autotest_common.sh@10 -- # set +x 00:12:34.423 ************************************ 00:12:34.423 START TEST nvmf_initiator_timeout 00:12:34.423 ************************************ 00:12:34.423 06:42:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:12:34.682 * Looking for test storage... 00:12:34.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:34.682 06:42:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:34.682 06:42:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:34.682 06:42:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:34.682 06:42:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:34.682 06:42:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:34.682 06:42:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:34.682 06:42:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:34.683 06:42:48 -- scripts/common.sh@335 -- # IFS=.-: 00:12:34.683 06:42:48 -- scripts/common.sh@335 -- # read -ra ver1 00:12:34.683 06:42:48 -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.683 06:42:48 -- scripts/common.sh@336 -- # read -ra ver2 00:12:34.683 06:42:48 -- scripts/common.sh@337 -- # local 'op=<' 00:12:34.683 06:42:48 -- scripts/common.sh@339 -- # ver1_l=2 00:12:34.683 06:42:48 -- scripts/common.sh@340 -- # ver2_l=1 00:12:34.683 06:42:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:34.683 06:42:48 -- scripts/common.sh@343 -- # case "$op" in 00:12:34.683 06:42:48 -- scripts/common.sh@344 -- # : 1 00:12:34.683 06:42:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:34.683 06:42:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:34.683 06:42:48 -- scripts/common.sh@364 -- # decimal 1 00:12:34.683 06:42:48 -- scripts/common.sh@352 -- # local d=1 00:12:34.683 06:42:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.683 06:42:48 -- scripts/common.sh@354 -- # echo 1 00:12:34.683 06:42:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:34.683 06:42:48 -- scripts/common.sh@365 -- # decimal 2 00:12:34.683 06:42:48 -- scripts/common.sh@352 -- # local d=2 00:12:34.683 06:42:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.683 06:42:48 -- scripts/common.sh@354 -- # echo 2 00:12:34.683 06:42:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:34.683 06:42:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:34.683 06:42:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:34.683 06:42:48 -- scripts/common.sh@367 -- # return 0 00:12:34.683 06:42:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.683 06:42:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:34.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.683 --rc genhtml_branch_coverage=1 00:12:34.683 --rc genhtml_function_coverage=1 00:12:34.683 --rc genhtml_legend=1 00:12:34.683 --rc geninfo_all_blocks=1 00:12:34.683 --rc geninfo_unexecuted_blocks=1 00:12:34.683 00:12:34.683 ' 00:12:34.683 06:42:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:34.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.683 --rc genhtml_branch_coverage=1 00:12:34.683 --rc genhtml_function_coverage=1 00:12:34.683 --rc genhtml_legend=1 00:12:34.683 --rc geninfo_all_blocks=1 00:12:34.683 --rc geninfo_unexecuted_blocks=1 00:12:34.683 00:12:34.683 ' 00:12:34.683 06:42:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:34.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.683 --rc genhtml_branch_coverage=1 00:12:34.683 --rc genhtml_function_coverage=1 00:12:34.683 --rc genhtml_legend=1 00:12:34.683 --rc geninfo_all_blocks=1 00:12:34.683 --rc geninfo_unexecuted_blocks=1 00:12:34.683 00:12:34.683 ' 00:12:34.683 06:42:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:34.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.683 --rc genhtml_branch_coverage=1 00:12:34.683 --rc genhtml_function_coverage=1 00:12:34.683 --rc genhtml_legend=1 00:12:34.683 --rc geninfo_all_blocks=1 00:12:34.683 --rc geninfo_unexecuted_blocks=1 00:12:34.683 00:12:34.683 ' 00:12:34.683 06:42:48 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:34.683 06:42:48 -- nvmf/common.sh@7 -- # uname -s 00:12:34.683 06:42:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.683 06:42:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.683 06:42:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.683 06:42:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.683 06:42:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.683 06:42:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.683 06:42:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.683 06:42:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.683 06:42:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.683 06:42:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.683 06:42:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 
00:12:34.683 06:42:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:12:34.683 06:42:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.683 06:42:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.683 06:42:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:34.683 06:42:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:34.683 06:42:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.683 06:42:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.683 06:42:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.683 06:42:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.683 06:42:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.683 06:42:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.683 06:42:48 -- paths/export.sh@5 -- # export PATH 00:12:34.683 06:42:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.683 06:42:48 -- nvmf/common.sh@46 -- # : 0 00:12:34.683 06:42:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:34.683 06:42:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:34.683 06:42:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:34.683 06:42:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.683 06:42:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.683 06:42:48 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:12:34.683 06:42:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:34.683 06:42:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:34.683 06:42:48 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:34.683 06:42:48 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:34.683 06:42:48 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:12:34.683 06:42:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:34.683 06:42:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.683 06:42:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:34.683 06:42:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:34.683 06:42:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:34.683 06:42:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.683 06:42:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.683 06:42:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.683 06:42:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:34.683 06:42:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:34.683 06:42:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:34.683 06:42:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:34.683 06:42:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:34.683 06:42:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:34.683 06:42:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.683 06:42:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.683 06:42:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:34.683 06:42:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:34.683 06:42:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:34.683 06:42:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:34.683 06:42:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:34.683 06:42:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.683 06:42:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:34.683 06:42:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:34.683 06:42:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:34.683 06:42:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:34.683 06:42:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:34.683 06:42:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:34.683 Cannot find device "nvmf_tgt_br" 00:12:34.683 06:42:48 -- nvmf/common.sh@154 -- # true 00:12:34.683 06:42:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:34.683 Cannot find device "nvmf_tgt_br2" 00:12:34.683 06:42:48 -- nvmf/common.sh@155 -- # true 00:12:34.683 06:42:48 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:34.683 06:42:48 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:34.683 Cannot find device "nvmf_tgt_br" 00:12:34.683 06:42:48 -- nvmf/common.sh@157 -- # true 00:12:34.683 06:42:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:34.683 Cannot find device "nvmf_tgt_br2" 00:12:34.683 06:42:48 -- nvmf/common.sh@158 -- # true 00:12:34.683 06:42:48 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:34.942 06:42:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:34.942 06:42:48 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:12:34.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.942 06:42:48 -- nvmf/common.sh@161 -- # true 00:12:34.942 06:42:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:34.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:34.942 06:42:48 -- nvmf/common.sh@162 -- # true 00:12:34.942 06:42:48 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:34.942 06:42:48 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:34.942 06:42:48 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:34.942 06:42:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:34.942 06:42:48 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:34.942 06:42:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:34.942 06:42:48 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:34.942 06:42:48 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:34.942 06:42:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:34.942 06:42:48 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:34.942 06:42:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:34.942 06:42:48 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:34.942 06:42:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:34.942 06:42:48 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:34.942 06:42:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:34.942 06:42:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:34.942 06:42:48 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:34.942 06:42:48 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:34.942 06:42:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:34.942 06:42:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:34.942 06:42:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:34.942 06:42:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:34.942 06:42:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:34.942 06:42:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:34.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:12:34.942 00:12:34.942 --- 10.0.0.2 ping statistics --- 00:12:34.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.942 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:34.942 06:42:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:34.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:34.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:12:34.942 00:12:34.942 --- 10.0.0.3 ping statistics --- 00:12:34.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.943 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:34.943 06:42:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:34.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:34.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:12:34.943 00:12:34.943 --- 10.0.0.1 ping statistics --- 00:12:34.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.943 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:34.943 06:42:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.943 06:42:48 -- nvmf/common.sh@421 -- # return 0 00:12:34.943 06:42:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:34.943 06:42:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.201 06:42:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:35.201 06:42:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:35.201 06:42:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.201 06:42:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:35.201 06:42:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:35.201 06:42:48 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:12:35.201 06:42:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:35.201 06:42:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.201 06:42:48 -- common/autotest_common.sh@10 -- # set +x 00:12:35.201 06:42:48 -- nvmf/common.sh@469 -- # nvmfpid=67516 00:12:35.201 06:42:48 -- nvmf/common.sh@470 -- # waitforlisten 67516 00:12:35.201 06:42:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.201 06:42:48 -- common/autotest_common.sh@829 -- # '[' -z 67516 ']' 00:12:35.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.201 06:42:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.201 06:42:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.201 06:42:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.201 06:42:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.201 06:42:48 -- common/autotest_common.sh@10 -- # set +x 00:12:35.201 [2024-12-14 06:42:49.005429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:35.201 [2024-12-14 06:42:49.005532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.201 [2024-12-14 06:42:49.139868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.460 [2024-12-14 06:42:49.194583] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:35.460 [2024-12-14 06:42:49.195112] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.460 [2024-12-14 06:42:49.195255] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.460 [2024-12-14 06:42:49.195469] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:35.460 [2024-12-14 06:42:49.195773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.460 [2024-12-14 06:42:49.195873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.460 [2024-12-14 06:42:49.195970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.460 [2024-12-14 06:42:49.195979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.026 06:42:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.026 06:42:49 -- common/autotest_common.sh@862 -- # return 0 00:12:36.026 06:42:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:36.026 06:42:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:36.026 06:42:49 -- common/autotest_common.sh@10 -- # set +x 00:12:36.026 06:42:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.026 06:42:50 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:36.026 06:42:50 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:36.026 06:42:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.026 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:12:36.284 Malloc0 00:12:36.284 06:42:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.284 06:42:50 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:12:36.284 06:42:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.284 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:12:36.284 Delay0 00:12:36.284 06:42:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.284 06:42:50 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.284 06:42:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.284 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:12:36.284 [2024-12-14 06:42:50.059443] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.284 06:42:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.284 06:42:50 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:36.284 06:42:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.284 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:12:36.284 06:42:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.284 06:42:50 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.284 06:42:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.284 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:12:36.284 06:42:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.284 06:42:50 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.284 06:42:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.284 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:12:36.284 [2024-12-14 06:42:50.087600] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.284 06:42:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.284 06:42:50 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.284 06:42:50 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.284 06:42:50 -- common/autotest_common.sh@1187 -- # local i=0 00:12:36.284 06:42:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.284 06:42:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:36.284 06:42:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:38.813 06:42:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:38.813 06:42:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:38.813 06:42:52 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.813 06:42:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:38.813 06:42:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.813 06:42:52 -- common/autotest_common.sh@1197 -- # return 0 00:12:38.813 06:42:52 -- target/initiator_timeout.sh@35 -- # fio_pid=67586 00:12:38.813 06:42:52 -- target/initiator_timeout.sh@37 -- # sleep 3 00:12:38.813 06:42:52 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:12:38.813 [global] 00:12:38.813 thread=1 00:12:38.813 invalidate=1 00:12:38.813 rw=write 00:12:38.813 time_based=1 00:12:38.813 runtime=60 00:12:38.813 ioengine=libaio 00:12:38.813 direct=1 00:12:38.813 bs=4096 00:12:38.813 iodepth=1 00:12:38.813 norandommap=0 00:12:38.813 numjobs=1 00:12:38.813 00:12:38.813 verify_dump=1 00:12:38.813 verify_backlog=512 00:12:38.813 verify_state_save=0 00:12:38.813 do_verify=1 00:12:38.813 verify=crc32c-intel 00:12:38.813 [job0] 00:12:38.813 filename=/dev/nvme0n1 00:12:38.813 Could not set queue depth (nvme0n1) 00:12:38.813 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:38.813 fio-3.35 00:12:38.813 Starting 1 thread 00:12:41.358 06:42:55 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:12:41.358 06:42:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.358 06:42:55 -- common/autotest_common.sh@10 -- # set +x 00:12:41.358 true 00:12:41.358 06:42:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.358 06:42:55 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:12:41.358 06:42:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.358 06:42:55 -- common/autotest_common.sh@10 -- # set +x 00:12:41.358 true 00:12:41.358 06:42:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.358 06:42:55 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:12:41.358 06:42:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.358 06:42:55 -- common/autotest_common.sh@10 -- # set +x 00:12:41.358 true 00:12:41.358 06:42:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.358 06:42:55 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:12:41.358 06:42:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.358 06:42:55 -- common/autotest_common.sh@10 -- # set +x 00:12:41.358 true 00:12:41.358 06:42:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.358 06:42:55 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:12:44.643 06:42:58 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:12:44.643 06:42:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.643 06:42:58 -- common/autotest_common.sh@10 -- # set +x 00:12:44.643 true 00:12:44.643 06:42:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.643 06:42:58 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:12:44.643 06:42:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.643 06:42:58 -- common/autotest_common.sh@10 -- # set +x 00:12:44.643 true 00:12:44.643 06:42:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.643 06:42:58 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:12:44.643 06:42:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.643 06:42:58 -- common/autotest_common.sh@10 -- # set +x 00:12:44.643 true 00:12:44.643 06:42:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.643 06:42:58 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:12:44.643 06:42:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.643 06:42:58 -- common/autotest_common.sh@10 -- # set +x 00:12:44.643 true 00:12:44.643 06:42:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.643 06:42:58 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:12:44.643 06:42:58 -- target/initiator_timeout.sh@54 -- # wait 67586 00:13:40.862 00:13:40.862 job0: (groupid=0, jobs=1): err= 0: pid=67607: Sat Dec 14 06:43:52 2024 00:13:40.862 read: IOPS=819, BW=3277KiB/s (3355kB/s)(192MiB/60000msec) 00:13:40.862 slat (usec): min=9, max=1489, avg=13.21, stdev= 8.03 00:13:40.862 clat (usec): min=3, max=2055, avg=200.74, stdev=28.94 00:13:40.862 lat (usec): min=166, max=2078, avg=213.95, stdev=30.67 00:13:40.862 clat percentiles (usec): 00:13:40.862 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 184], 00:13:40.862 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 202], 00:13:40.862 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 239], 00:13:40.862 | 99.00th=[ 265], 99.50th=[ 293], 99.90th=[ 537], 99.95th=[ 594], 00:13:40.862 | 99.99th=[ 881] 00:13:40.862 write: IOPS=823, BW=3294KiB/s (3373kB/s)(193MiB/60000msec); 0 zone resets 00:13:40.862 slat (usec): min=11, max=16822, avg=19.77, stdev=81.02 00:13:40.862 clat (usec): min=112, max=40666k, avg=978.76, stdev=182943.79 00:13:40.862 lat (usec): min=131, max=40666k, avg=998.54, stdev=182943.88 00:13:40.862 clat percentiles (usec): 00:13:40.862 | 1.00th=[ 123], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 139], 00:13:40.862 | 30.00th=[ 143], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:13:40.862 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 192], 00:13:40.862 | 99.00th=[ 217], 99.50th=[ 243], 99.90th=[ 498], 99.95th=[ 603], 00:13:40.862 | 99.99th=[ 1303] 00:13:40.862 bw ( KiB/s): min= 2440, max=12288, per=100.00%, avg=9882.67, stdev=2105.51, samples=39 00:13:40.862 iops : min= 610, max= 3072, avg=2470.67, stdev=526.38, samples=39 00:13:40.862 lat (usec) : 4=0.01%, 250=98.62%, 500=1.26%, 750=0.09%, 1000=0.01% 00:13:40.862 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:13:40.862 cpu : usr=0.56%, sys=2.09%, ctx=98580, majf=0, minf=5 00:13:40.862 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:40.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:13:40.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.862 issued rwts: total=49152,49410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.862 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:40.862 00:13:40.862 Run status group 0 (all jobs): 00:13:40.862 READ: bw=3277KiB/s (3355kB/s), 3277KiB/s-3277KiB/s (3355kB/s-3355kB/s), io=192MiB (201MB), run=60000-60000msec 00:13:40.862 WRITE: bw=3294KiB/s (3373kB/s), 3294KiB/s-3294KiB/s (3373kB/s-3373kB/s), io=193MiB (202MB), run=60000-60000msec 00:13:40.862 00:13:40.862 Disk stats (read/write): 00:13:40.862 nvme0n1: ios=49213/49152, merge=0/0, ticks=10262/8092, in_queue=18354, util=99.88% 00:13:40.862 06:43:52 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.862 06:43:52 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:40.862 06:43:52 -- common/autotest_common.sh@1208 -- # local i=0 00:13:40.862 06:43:52 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:40.862 06:43:52 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.862 06:43:52 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.862 06:43:52 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:40.862 06:43:52 -- common/autotest_common.sh@1220 -- # return 0 00:13:40.862 06:43:52 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:13:40.862 06:43:52 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:13:40.862 nvmf hotplug test: fio successful as expected 00:13:40.862 06:43:52 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.862 06:43:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.862 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:13:40.862 06:43:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.862 06:43:52 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:13:40.862 06:43:52 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:13:40.863 06:43:52 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:13:40.863 06:43:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:40.863 06:43:52 -- nvmf/common.sh@116 -- # sync 00:13:40.863 06:43:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:40.863 06:43:52 -- nvmf/common.sh@119 -- # set +e 00:13:40.863 06:43:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:40.863 06:43:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:40.863 rmmod nvme_tcp 00:13:40.863 rmmod nvme_fabrics 00:13:40.863 rmmod nvme_keyring 00:13:40.863 06:43:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:40.863 06:43:52 -- nvmf/common.sh@123 -- # set -e 00:13:40.863 06:43:52 -- nvmf/common.sh@124 -- # return 0 00:13:40.863 06:43:52 -- nvmf/common.sh@477 -- # '[' -n 67516 ']' 00:13:40.863 06:43:52 -- nvmf/common.sh@478 -- # killprocess 67516 00:13:40.863 06:43:52 -- common/autotest_common.sh@936 -- # '[' -z 67516 ']' 00:13:40.863 06:43:52 -- common/autotest_common.sh@940 -- # kill -0 67516 00:13:40.863 06:43:52 -- common/autotest_common.sh@941 -- # uname 00:13:40.863 06:43:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:40.863 06:43:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67516 00:13:40.863 killing process with pid 
67516 00:13:40.863 06:43:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:40.863 06:43:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:40.863 06:43:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67516' 00:13:40.863 06:43:52 -- common/autotest_common.sh@955 -- # kill 67516 00:13:40.863 06:43:52 -- common/autotest_common.sh@960 -- # wait 67516 00:13:40.863 06:43:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:40.863 06:43:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:40.863 06:43:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:40.863 06:43:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.863 06:43:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:40.863 06:43:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.863 06:43:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.863 06:43:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.863 06:43:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:40.863 00:13:40.863 real 1m4.591s 00:13:40.863 user 3m53.018s 00:13:40.863 sys 0m21.897s 00:13:40.863 ************************************ 00:13:40.863 END TEST nvmf_initiator_timeout 00:13:40.863 ************************************ 00:13:40.863 06:43:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:40.863 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:13:40.863 06:43:53 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:13:40.863 06:43:53 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:13:40.863 06:43:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:40.863 06:43:53 -- common/autotest_common.sh@10 -- # set +x 00:13:40.863 06:43:53 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:13:40.863 06:43:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:40.863 06:43:53 -- common/autotest_common.sh@10 -- # set +x 00:13:40.863 06:43:53 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:13:40.863 06:43:53 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:40.863 06:43:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:40.863 06:43:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:40.863 06:43:53 -- common/autotest_common.sh@10 -- # set +x 00:13:40.863 ************************************ 00:13:40.863 START TEST nvmf_identify 00:13:40.863 ************************************ 00:13:40.863 06:43:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:13:40.863 * Looking for test storage... 
00:13:40.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:40.863 06:43:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:40.863 06:43:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:40.863 06:43:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:40.863 06:43:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:40.863 06:43:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:40.863 06:43:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:40.863 06:43:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:40.863 06:43:53 -- scripts/common.sh@335 -- # IFS=.-: 00:13:40.863 06:43:53 -- scripts/common.sh@335 -- # read -ra ver1 00:13:40.863 06:43:53 -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.863 06:43:53 -- scripts/common.sh@336 -- # read -ra ver2 00:13:40.863 06:43:53 -- scripts/common.sh@337 -- # local 'op=<' 00:13:40.863 06:43:53 -- scripts/common.sh@339 -- # ver1_l=2 00:13:40.863 06:43:53 -- scripts/common.sh@340 -- # ver2_l=1 00:13:40.863 06:43:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:40.863 06:43:53 -- scripts/common.sh@343 -- # case "$op" in 00:13:40.863 06:43:53 -- scripts/common.sh@344 -- # : 1 00:13:40.863 06:43:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:40.863 06:43:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:40.863 06:43:53 -- scripts/common.sh@364 -- # decimal 1 00:13:40.863 06:43:53 -- scripts/common.sh@352 -- # local d=1 00:13:40.863 06:43:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.863 06:43:53 -- scripts/common.sh@354 -- # echo 1 00:13:40.863 06:43:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:40.863 06:43:53 -- scripts/common.sh@365 -- # decimal 2 00:13:40.863 06:43:53 -- scripts/common.sh@352 -- # local d=2 00:13:40.863 06:43:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.863 06:43:53 -- scripts/common.sh@354 -- # echo 2 00:13:40.863 06:43:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:40.863 06:43:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:40.863 06:43:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:40.863 06:43:53 -- scripts/common.sh@367 -- # return 0 00:13:40.863 06:43:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.863 06:43:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:40.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.863 --rc genhtml_branch_coverage=1 00:13:40.863 --rc genhtml_function_coverage=1 00:13:40.863 --rc genhtml_legend=1 00:13:40.863 --rc geninfo_all_blocks=1 00:13:40.863 --rc geninfo_unexecuted_blocks=1 00:13:40.863 00:13:40.863 ' 00:13:40.863 06:43:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:40.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.863 --rc genhtml_branch_coverage=1 00:13:40.863 --rc genhtml_function_coverage=1 00:13:40.863 --rc genhtml_legend=1 00:13:40.863 --rc geninfo_all_blocks=1 00:13:40.863 --rc geninfo_unexecuted_blocks=1 00:13:40.863 00:13:40.863 ' 00:13:40.863 06:43:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:40.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.863 --rc genhtml_branch_coverage=1 00:13:40.863 --rc genhtml_function_coverage=1 00:13:40.863 --rc genhtml_legend=1 00:13:40.863 --rc geninfo_all_blocks=1 00:13:40.863 --rc geninfo_unexecuted_blocks=1 00:13:40.863 00:13:40.863 ' 00:13:40.863 
06:43:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:40.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.863 --rc genhtml_branch_coverage=1 00:13:40.863 --rc genhtml_function_coverage=1 00:13:40.863 --rc genhtml_legend=1 00:13:40.863 --rc geninfo_all_blocks=1 00:13:40.863 --rc geninfo_unexecuted_blocks=1 00:13:40.863 00:13:40.863 ' 00:13:40.863 06:43:53 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:40.863 06:43:53 -- nvmf/common.sh@7 -- # uname -s 00:13:40.863 06:43:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.863 06:43:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.863 06:43:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.863 06:43:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.863 06:43:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.863 06:43:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.863 06:43:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.863 06:43:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.863 06:43:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.863 06:43:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.863 06:43:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:13:40.863 06:43:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:13:40.863 06:43:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.863 06:43:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.863 06:43:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:40.863 06:43:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:40.863 06:43:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.863 06:43:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.863 06:43:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.863 06:43:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.863 06:43:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.863 06:43:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.863 06:43:53 -- paths/export.sh@5 -- # export PATH 00:13:40.863 06:43:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.863 06:43:53 -- nvmf/common.sh@46 -- # : 0 00:13:40.863 06:43:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:40.864 06:43:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:40.864 06:43:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:40.864 06:43:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.864 06:43:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.864 06:43:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:40.864 06:43:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:40.864 06:43:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:40.864 06:43:53 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:40.864 06:43:53 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:40.864 06:43:53 -- host/identify.sh@14 -- # nvmftestinit 00:13:40.864 06:43:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:40.864 06:43:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.864 06:43:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:40.864 06:43:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:40.864 06:43:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:40.864 06:43:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.864 06:43:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.864 06:43:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.864 06:43:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:40.864 06:43:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:40.864 06:43:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:40.864 06:43:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:40.864 06:43:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:40.864 06:43:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:40.864 06:43:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.864 06:43:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.864 06:43:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:40.864 06:43:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:40.864 06:43:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:40.864 06:43:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:40.864 06:43:53 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:40.864 06:43:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.864 06:43:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:40.864 06:43:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:40.864 06:43:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:40.864 06:43:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:40.864 06:43:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:40.864 06:43:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:40.864 Cannot find device "nvmf_tgt_br" 00:13:40.864 06:43:53 -- nvmf/common.sh@154 -- # true 00:13:40.864 06:43:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:40.864 Cannot find device "nvmf_tgt_br2" 00:13:40.864 06:43:53 -- nvmf/common.sh@155 -- # true 00:13:40.864 06:43:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:40.864 06:43:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:40.864 Cannot find device "nvmf_tgt_br" 00:13:40.864 06:43:53 -- nvmf/common.sh@157 -- # true 00:13:40.864 06:43:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:40.864 Cannot find device "nvmf_tgt_br2" 00:13:40.864 06:43:53 -- nvmf/common.sh@158 -- # true 00:13:40.864 06:43:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:40.864 06:43:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:40.864 06:43:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:40.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.864 06:43:53 -- nvmf/common.sh@161 -- # true 00:13:40.864 06:43:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:40.864 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.864 06:43:53 -- nvmf/common.sh@162 -- # true 00:13:40.864 06:43:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:40.864 06:43:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:40.864 06:43:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:40.864 06:43:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:40.864 06:43:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:40.864 06:43:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:40.864 06:43:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:40.864 06:43:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:40.864 06:43:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:40.864 06:43:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:40.864 06:43:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:40.864 06:43:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:40.864 06:43:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:40.864 06:43:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:40.864 06:43:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:40.864 06:43:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:13:40.864 06:43:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:40.864 06:43:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:40.864 06:43:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:40.864 06:43:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:40.864 06:43:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:40.864 06:43:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:40.864 06:43:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:40.864 06:43:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:40.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:13:40.864 00:13:40.864 --- 10.0.0.2 ping statistics --- 00:13:40.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.864 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:40.864 06:43:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:40.864 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:40.864 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:13:40.864 00:13:40.864 --- 10.0.0.3 ping statistics --- 00:13:40.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.864 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:40.864 06:43:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:40.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:40.864 00:13:40.864 --- 10.0.0.1 ping statistics --- 00:13:40.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.864 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:40.864 06:43:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.864 06:43:53 -- nvmf/common.sh@421 -- # return 0 00:13:40.864 06:43:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:40.864 06:43:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.864 06:43:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:40.864 06:43:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:40.864 06:43:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.864 06:43:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:40.864 06:43:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:40.864 06:43:53 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:13:40.864 06:43:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:40.864 06:43:53 -- common/autotest_common.sh@10 -- # set +x 00:13:40.864 06:43:53 -- host/identify.sh@19 -- # nvmfpid=68462 00:13:40.864 06:43:53 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.864 06:43:53 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:40.864 06:43:53 -- host/identify.sh@23 -- # waitforlisten 68462 00:13:40.864 06:43:53 -- common/autotest_common.sh@829 -- # '[' -z 68462 ']' 00:13:40.864 06:43:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.864 06:43:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.864 06:43:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:40.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.864 06:43:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.864 06:43:53 -- common/autotest_common.sh@10 -- # set +x 00:13:40.864 [2024-12-14 06:43:53.716313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:40.864 [2024-12-14 06:43:53.716812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.864 [2024-12-14 06:43:53.851709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.864 [2024-12-14 06:43:53.904043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:40.864 [2024-12-14 06:43:53.904444] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.864 [2024-12-14 06:43:53.904503] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.864 [2024-12-14 06:43:53.904628] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.864 [2024-12-14 06:43:53.904733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.864 [2024-12-14 06:43:53.905369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.864 [2024-12-14 06:43:53.905537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.864 [2024-12-14 06:43:53.905680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.864 06:43:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.864 06:43:54 -- common/autotest_common.sh@862 -- # return 0 00:13:40.864 06:43:54 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.864 06:43:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.864 06:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:40.864 [2024-12-14 06:43:54.754743] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.864 06:43:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.864 06:43:54 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:13:40.864 06:43:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:40.864 06:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:40.864 06:43:54 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:40.864 06:43:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.864 06:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:40.864 Malloc0 00:13:40.864 06:43:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.864 06:43:54 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:40.864 06:43:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.864 06:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:40.865 06:43:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.865 06:43:54 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:13:40.865 06:43:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.865 06:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:40.865 06:43:54 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.865 06:43:54 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.865 06:43:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.865 06:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:40.865 [2024-12-14 06:43:54.850436] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.138 06:43:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.138 06:43:54 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:41.138 06:43:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.138 06:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:41.138 06:43:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.138 06:43:54 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:13:41.138 06:43:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.138 06:43:54 -- common/autotest_common.sh@10 -- # set +x 00:13:41.138 [2024-12-14 06:43:54.870178] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:13:41.138 [ 00:13:41.138 { 00:13:41.138 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:41.138 "subtype": "Discovery", 00:13:41.138 "listen_addresses": [ 00:13:41.138 { 00:13:41.138 "transport": "TCP", 00:13:41.138 "trtype": "TCP", 00:13:41.138 "adrfam": "IPv4", 00:13:41.138 "traddr": "10.0.0.2", 00:13:41.138 "trsvcid": "4420" 00:13:41.138 } 00:13:41.138 ], 00:13:41.138 "allow_any_host": true, 00:13:41.138 "hosts": [] 00:13:41.138 }, 00:13:41.138 { 00:13:41.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.138 "subtype": "NVMe", 00:13:41.138 "listen_addresses": [ 00:13:41.138 { 00:13:41.138 "transport": "TCP", 00:13:41.138 "trtype": "TCP", 00:13:41.138 "adrfam": "IPv4", 00:13:41.138 "traddr": "10.0.0.2", 00:13:41.138 "trsvcid": "4420" 00:13:41.138 } 00:13:41.138 ], 00:13:41.138 "allow_any_host": true, 00:13:41.138 "hosts": [], 00:13:41.138 "serial_number": "SPDK00000000000001", 00:13:41.138 "model_number": "SPDK bdev Controller", 00:13:41.138 "max_namespaces": 32, 00:13:41.138 "min_cntlid": 1, 00:13:41.138 "max_cntlid": 65519, 00:13:41.138 "namespaces": [ 00:13:41.138 { 00:13:41.138 "nsid": 1, 00:13:41.138 "bdev_name": "Malloc0", 00:13:41.138 "name": "Malloc0", 00:13:41.138 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:13:41.138 "eui64": "ABCDEF0123456789", 00:13:41.138 "uuid": "94beaa69-b065-4a59-92a9-242396233679" 00:13:41.138 } 00:13:41.138 ] 00:13:41.138 } 00:13:41.138 ] 00:13:41.138 06:43:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.138 06:43:54 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:13:41.138 [2024-12-14 06:43:54.906173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
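The subsystem layout that nvmf_get_subsystems prints back above (a discovery subsystem plus nqn.2016-06.io.spdk:cnode1 exposing Malloc0) is built by the rpc_cmd calls from identify.sh earlier in the trace. The same configuration can be reproduced against a running nvmf_tgt with SPDK's scripts/rpc.py; the sketch below simply mirrors the traced commands (flags copied from the log, paths assume the stock SPDK checkout):

    # Recreate the configuration shown in the trace using scripts/rpc.py
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # transport options as in the test
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems                         # returns the JSON dump shown above

spdk_nvme_identify is then pointed at the discovery subsystem over the same 10.0.0.2:4420 listener, which is what produces the controller and discovery-log output that follows in the trace.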
00:13:41.138 [2024-12-14 06:43:54.906233] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68497 ] 00:13:41.138 [2024-12-14 06:43:55.040438] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:13:41.138 [2024-12-14 06:43:55.040512] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:41.138 [2024-12-14 06:43:55.040519] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:41.138 [2024-12-14 06:43:55.040530] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:41.138 [2024-12-14 06:43:55.040540] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:13:41.138 [2024-12-14 06:43:55.040685] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:13:41.138 [2024-12-14 06:43:55.040771] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd21d30 0 00:13:41.138 [2024-12-14 06:43:55.053902] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:41.138 [2024-12-14 06:43:55.053925] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:41.138 [2024-12-14 06:43:55.053947] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:41.138 [2024-12-14 06:43:55.053951] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:41.138 [2024-12-14 06:43:55.053992] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.138 [2024-12-14 06:43:55.053999] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.138 [2024-12-14 06:43:55.054003] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21d30) 00:13:41.138 [2024-12-14 06:43:55.054016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:41.138 [2024-12-14 06:43:55.054046] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7ff30, cid 0, qid 0 00:13:41.138 [2024-12-14 06:43:55.061917] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.138 [2024-12-14 06:43:55.061936] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.139 [2024-12-14 06:43:55.061940] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.061945] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd7ff30) on tqpair=0xd21d30 00:13:41.139 [2024-12-14 06:43:55.061956] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:41.139 [2024-12-14 06:43:55.061964] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:13:41.139 [2024-12-14 06:43:55.061970] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:13:41.139 [2024-12-14 06:43:55.061986] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.061991] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.061995] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21d30) 00:13:41.139 [2024-12-14 06:43:55.062004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.139 [2024-12-14 06:43:55.062040] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7ff30, cid 0, qid 0 00:13:41.139 [2024-12-14 06:43:55.062098] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.139 [2024-12-14 06:43:55.062105] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.139 [2024-12-14 06:43:55.062108] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062112] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd7ff30) on tqpair=0xd21d30 00:13:41.139 [2024-12-14 06:43:55.062118] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:13:41.139 [2024-12-14 06:43:55.062125] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:13:41.139 [2024-12-14 06:43:55.062132] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062136] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062140] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21d30) 00:13:41.139 [2024-12-14 06:43:55.062147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.139 [2024-12-14 06:43:55.062181] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7ff30, cid 0, qid 0 00:13:41.139 [2024-12-14 06:43:55.062241] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.139 [2024-12-14 06:43:55.062247] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.139 [2024-12-14 06:43:55.062251] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062255] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd7ff30) on tqpair=0xd21d30 00:13:41.139 [2024-12-14 06:43:55.062261] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:13:41.139 [2024-12-14 06:43:55.062269] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:13:41.139 [2024-12-14 06:43:55.062276] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062280] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062284] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21d30) 00:13:41.139 [2024-12-14 06:43:55.062291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.139 [2024-12-14 06:43:55.062307] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7ff30, cid 0, qid 0 00:13:41.139 [2024-12-14 06:43:55.062390] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.139 [2024-12-14 06:43:55.062397] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:13:41.139 [2024-12-14 06:43:55.062400] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062404] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd7ff30) on tqpair=0xd21d30 00:13:41.139 [2024-12-14 06:43:55.062410] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:41.139 [2024-12-14 06:43:55.062420] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062425] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062429] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21d30) 00:13:41.139 [2024-12-14 06:43:55.062436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.139 [2024-12-14 06:43:55.062452] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7ff30, cid 0, qid 0 00:13:41.139 [2024-12-14 06:43:55.062517] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.139 [2024-12-14 06:43:55.062524] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.139 [2024-12-14 06:43:55.062528] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062532] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd7ff30) on tqpair=0xd21d30 00:13:41.139 [2024-12-14 06:43:55.062538] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:13:41.139 [2024-12-14 06:43:55.062543] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:13:41.139 [2024-12-14 06:43:55.062551] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:41.139 [2024-12-14 06:43:55.062656] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:13:41.139 [2024-12-14 06:43:55.062662] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:41.139 [2024-12-14 06:43:55.062671] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062675] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062679] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21d30) 00:13:41.139 [2024-12-14 06:43:55.062686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.139 [2024-12-14 06:43:55.062730] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7ff30, cid 0, qid 0 00:13:41.139 [2024-12-14 06:43:55.062799] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.139 [2024-12-14 06:43:55.062807] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.139 [2024-12-14 06:43:55.062811] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062816] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd7ff30) on tqpair=0xd21d30 00:13:41.139 [2024-12-14 06:43:55.062822] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:41.139 [2024-12-14 06:43:55.062832] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062837] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.062841] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21d30) 00:13:41.139 [2024-12-14 06:43:55.062849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.139 [2024-12-14 06:43:55.062866] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7ff30, cid 0, qid 0 00:13:41.139 [2024-12-14 06:43:55.063002] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.139 [2024-12-14 06:43:55.063014] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.139 [2024-12-14 06:43:55.063017] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.139 [2024-12-14 06:43:55.063022] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd7ff30) on tqpair=0xd21d30 00:13:41.139 [2024-12-14 06:43:55.063027] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:41.139 [2024-12-14 06:43:55.063033] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:13:41.140 [2024-12-14 06:43:55.063042] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:13:41.140 [2024-12-14 06:43:55.063059] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:13:41.140 [2024-12-14 06:43:55.063070] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063074] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063079] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21d30) 00:13:41.140 [2024-12-14 06:43:55.063087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.140 [2024-12-14 06:43:55.063110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7ff30, cid 0, qid 0 00:13:41.140 [2024-12-14 06:43:55.063223] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:41.140 [2024-12-14 06:43:55.063234] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:41.140 [2024-12-14 06:43:55.063239] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063243] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd21d30): datao=0, datal=4096, cccid=0 00:13:41.140 [2024-12-14 06:43:55.063248] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd7ff30) on tqpair(0xd21d30): expected_datao=0, payload_size=4096 00:13:41.140 [2024-12-14 06:43:55.063257] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063262] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063272] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.140 [2024-12-14 06:43:55.063278] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.140 [2024-12-14 06:43:55.063282] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063286] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd7ff30) on tqpair=0xd21d30 00:13:41.140 [2024-12-14 06:43:55.063295] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:13:41.140 [2024-12-14 06:43:55.063300] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:13:41.140 [2024-12-14 06:43:55.063305] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:13:41.140 [2024-12-14 06:43:55.063311] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:13:41.140 [2024-12-14 06:43:55.063331] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:13:41.140 [2024-12-14 06:43:55.063337] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:13:41.140 [2024-12-14 06:43:55.063368] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:13:41.140 [2024-12-14 06:43:55.063377] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063381] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063386] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21d30) 00:13:41.140 [2024-12-14 06:43:55.063394] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.140 [2024-12-14 06:43:55.063416] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7ff30, cid 0, qid 0 00:13:41.140 [2024-12-14 06:43:55.063491] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.140 [2024-12-14 06:43:55.063499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.140 [2024-12-14 06:43:55.063503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063507] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd7ff30) on tqpair=0xd21d30 00:13:41.140 [2024-12-14 06:43:55.063532] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063542] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063546] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd21d30) 00:13:41.140 [2024-12-14 06:43:55.063555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.140 [2024-12-14 06:43:55.063562] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063566] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063570] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd21d30) 00:13:41.140 [2024-12-14 06:43:55.063576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.140 [2024-12-14 06:43:55.063583] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063587] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063591] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd21d30) 00:13:41.140 [2024-12-14 06:43:55.063597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.140 [2024-12-14 06:43:55.063603] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063607] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063611] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21d30) 00:13:41.140 [2024-12-14 06:43:55.063617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.140 [2024-12-14 06:43:55.063623] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:13:41.140 [2024-12-14 06:43:55.063641] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:41.140 [2024-12-14 06:43:55.063650] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063654] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063659] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd21d30) 00:13:41.140 [2024-12-14 06:43:55.063667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.140 [2024-12-14 06:43:55.063692] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7ff30, cid 0, qid 0 00:13:41.140 [2024-12-14 06:43:55.063715] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd80090, cid 1, qid 0 00:13:41.140 [2024-12-14 06:43:55.063720] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd801f0, cid 2, qid 0 00:13:41.140 [2024-12-14 06:43:55.063725] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd80350, cid 3, qid 0 00:13:41.140 [2024-12-14 06:43:55.063730] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd804b0, cid 4, qid 0 00:13:41.140 [2024-12-14 06:43:55.063875] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.140 [2024-12-14 06:43:55.063883] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.140 [2024-12-14 06:43:55.063887] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063891] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd804b0) on tqpair=0xd21d30 00:13:41.140 
[2024-12-14 06:43:55.063897] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:13:41.140 [2024-12-14 06:43:55.063904] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:13:41.140 [2024-12-14 06:43:55.063933] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063941] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.140 [2024-12-14 06:43:55.063945] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd21d30) 00:13:41.140 [2024-12-14 06:43:55.063953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.140 [2024-12-14 06:43:55.063974] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd804b0, cid 4, qid 0 00:13:41.141 [2024-12-14 06:43:55.064053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:41.141 [2024-12-14 06:43:55.064061] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:41.141 [2024-12-14 06:43:55.064065] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064069] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd21d30): datao=0, datal=4096, cccid=4 00:13:41.141 [2024-12-14 06:43:55.064074] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd804b0) on tqpair(0xd21d30): expected_datao=0, payload_size=4096 00:13:41.141 [2024-12-14 06:43:55.064083] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064087] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064096] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.141 [2024-12-14 06:43:55.064103] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.141 [2024-12-14 06:43:55.064107] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064111] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd804b0) on tqpair=0xd21d30 00:13:41.141 [2024-12-14 06:43:55.064126] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:13:41.141 [2024-12-14 06:43:55.064167] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064173] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064186] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd21d30) 00:13:41.141 [2024-12-14 06:43:55.064193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.141 [2024-12-14 06:43:55.064201] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064206] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064209] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd21d30) 00:13:41.141 [2024-12-14 06:43:55.064216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:13:41.141 [2024-12-14 06:43:55.064255] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd804b0, cid 4, qid 0 00:13:41.141 [2024-12-14 06:43:55.064263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd80610, cid 5, qid 0 00:13:41.141 [2024-12-14 06:43:55.064395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:41.141 [2024-12-14 06:43:55.064402] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:41.141 [2024-12-14 06:43:55.064406] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064410] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd21d30): datao=0, datal=1024, cccid=4 00:13:41.141 [2024-12-14 06:43:55.064415] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd804b0) on tqpair(0xd21d30): expected_datao=0, payload_size=1024 00:13:41.141 [2024-12-14 06:43:55.064422] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064426] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064432] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.141 [2024-12-14 06:43:55.064438] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.141 [2024-12-14 06:43:55.064441] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064445] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd80610) on tqpair=0xd21d30 00:13:41.141 [2024-12-14 06:43:55.064477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.141 [2024-12-14 06:43:55.064484] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.141 [2024-12-14 06:43:55.064488] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064492] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd804b0) on tqpair=0xd21d30 00:13:41.141 [2024-12-14 06:43:55.064507] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064513] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064517] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd21d30) 00:13:41.141 [2024-12-14 06:43:55.064524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.141 [2024-12-14 06:43:55.064548] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd804b0, cid 4, qid 0 00:13:41.141 [2024-12-14 06:43:55.064632] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:41.141 [2024-12-14 06:43:55.064639] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:41.141 [2024-12-14 06:43:55.064642] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064646] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd21d30): datao=0, datal=3072, cccid=4 00:13:41.141 [2024-12-14 06:43:55.064650] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd804b0) on tqpair(0xd21d30): expected_datao=0, payload_size=3072 00:13:41.141 [2024-12-14 06:43:55.064658] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 
06:43:55.064661] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064669] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.141 [2024-12-14 06:43:55.064675] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.141 [2024-12-14 06:43:55.064678] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064682] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd804b0) on tqpair=0xd21d30 00:13:41.141 [2024-12-14 06:43:55.064692] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064696] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064700] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd21d30) 00:13:41.141 [2024-12-14 06:43:55.064707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.141 [2024-12-14 06:43:55.064729] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd804b0, cid 4, qid 0 00:13:41.141 [2024-12-14 06:43:55.064802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:41.141 [2024-12-14 06:43:55.064809] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:41.141 [2024-12-14 06:43:55.064812] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:41.141 [2024-12-14 06:43:55.064816] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd21d30): datao=0, datal=8, cccid=4 00:13:41.141 [2024-12-14 06:43:55.064820] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd804b0) on tqpair(0xd21d30): expected_datao=0, payload_size=8 00:13:41.141 ===================================================== 00:13:41.141 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:13:41.141 ===================================================== 00:13:41.141 Controller Capabilities/Features 00:13:41.141 ================================ 00:13:41.141 Vendor ID: 0000 00:13:41.141 Subsystem Vendor ID: 0000 00:13:41.141 Serial Number: .................... 00:13:41.141 Model Number: ........................................ 
00:13:41.141 Firmware Version: 24.01.1 00:13:41.141 Recommended Arb Burst: 0 00:13:41.141 IEEE OUI Identifier: 00 00 00 00:13:41.141 Multi-path I/O 00:13:41.141 May have multiple subsystem ports: No 00:13:41.141 May have multiple controllers: No 00:13:41.141 Associated with SR-IOV VF: No 00:13:41.141 Max Data Transfer Size: 131072 00:13:41.141 Max Number of Namespaces: 0 00:13:41.141 Max Number of I/O Queues: 1024 00:13:41.141 NVMe Specification Version (VS): 1.3 00:13:41.141 NVMe Specification Version (Identify): 1.3 00:13:41.141 Maximum Queue Entries: 128 00:13:41.141 Contiguous Queues Required: Yes 00:13:41.142 Arbitration Mechanisms Supported 00:13:41.142 Weighted Round Robin: Not Supported 00:13:41.142 Vendor Specific: Not Supported 00:13:41.142 Reset Timeout: 15000 ms 00:13:41.142 Doorbell Stride: 4 bytes 00:13:41.142 NVM Subsystem Reset: Not Supported 00:13:41.142 Command Sets Supported 00:13:41.142 NVM Command Set: Supported 00:13:41.142 Boot Partition: Not Supported 00:13:41.142 Memory Page Size Minimum: 4096 bytes 00:13:41.142 Memory Page Size Maximum: 4096 bytes 00:13:41.142 Persistent Memory Region: Not Supported 00:13:41.142 Optional Asynchronous Events Supported 00:13:41.142 Namespace Attribute Notices: Not Supported 00:13:41.142 Firmware Activation Notices: Not Supported 00:13:41.142 ANA Change Notices: Not Supported 00:13:41.142 PLE Aggregate Log Change Notices: Not Supported 00:13:41.142 LBA Status Info Alert Notices: Not Supported 00:13:41.142 EGE Aggregate Log Change Notices: Not Supported 00:13:41.142 Normal NVM Subsystem Shutdown event: Not Supported 00:13:41.142 Zone Descriptor Change Notices: Not Supported 00:13:41.142 Discovery Log Change Notices: Supported 00:13:41.142 Controller Attributes 00:13:41.142 128-bit Host Identifier: Not Supported 00:13:41.142 Non-Operational Permissive Mode: Not Supported 00:13:41.142 NVM Sets: Not Supported 00:13:41.142 Read Recovery Levels: Not Supported 00:13:41.142 Endurance Groups: Not Supported 00:13:41.142 Predictable Latency Mode: Not Supported 00:13:41.142 Traffic Based Keep ALive: Not Supported 00:13:41.142 Namespace Granularity: Not Supported 00:13:41.142 SQ Associations: Not Supported 00:13:41.142 UUID List: Not Supported 00:13:41.142 Multi-Domain Subsystem: Not Supported 00:13:41.142 Fixed Capacity Management: Not Supported 00:13:41.142 Variable Capacity Management: Not Supported 00:13:41.142 Delete Endurance Group: Not Supported 00:13:41.142 Delete NVM Set: Not Supported 00:13:41.142 Extended LBA Formats Supported: Not Supported 00:13:41.142 Flexible Data Placement Supported: Not Supported 00:13:41.142 00:13:41.142 Controller Memory Buffer Support 00:13:41.142 ================================ 00:13:41.142 Supported: No 00:13:41.142 00:13:41.142 Persistent Memory Region Support 00:13:41.142 ================================ 00:13:41.142 Supported: No 00:13:41.142 00:13:41.142 Admin Command Set Attributes 00:13:41.142 ============================ 00:13:41.142 Security Send/Receive: Not Supported 00:13:41.142 Format NVM: Not Supported 00:13:41.142 Firmware Activate/Download: Not Supported 00:13:41.142 Namespace Management: Not Supported 00:13:41.142 Device Self-Test: Not Supported 00:13:41.142 Directives: Not Supported 00:13:41.142 NVMe-MI: Not Supported 00:13:41.142 Virtualization Management: Not Supported 00:13:41.142 Doorbell Buffer Config: Not Supported 00:13:41.142 Get LBA Status Capability: Not Supported 00:13:41.142 Command & Feature Lockdown Capability: Not Supported 00:13:41.142 Abort Command Limit: 1 00:13:41.142 
Async Event Request Limit: 4 00:13:41.142 Number of Firmware Slots: N/A 00:13:41.142 Firmware Slot 1 Read-Only: N/A 00:13:41.142 Firmware Activation Without Reset: N/A 00:13:41.142 Multiple Update Detection Support: N/A 00:13:41.142 Firmware Update Granularity: No Information Provided 00:13:41.142 Per-Namespace SMART Log: No 00:13:41.142 Asymmetric Namespace Access Log Page: Not Supported 00:13:41.142 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:13:41.142 Command Effects Log Page: Not Supported 00:13:41.142 Get Log Page Extended Data: Supported 00:13:41.142 Telemetry Log Pages: Not Supported 00:13:41.142 Persistent Event Log Pages: Not Supported 00:13:41.142 Supported Log Pages Log Page: May Support 00:13:41.142 Commands Supported & Effects Log Page: Not Supported 00:13:41.142 Feature Identifiers & Effects Log Page:May Support 00:13:41.142 NVMe-MI Commands & Effects Log Page: May Support 00:13:41.142 Data Area 4 for Telemetry Log: Not Supported 00:13:41.142 Error Log Page Entries Supported: 128 00:13:41.142 Keep Alive: Not Supported 00:13:41.142 00:13:41.142 NVM Command Set Attributes 00:13:41.142 ========================== 00:13:41.142 Submission Queue Entry Size 00:13:41.142 Max: 1 00:13:41.142 Min: 1 00:13:41.142 Completion Queue Entry Size 00:13:41.142 Max: 1 00:13:41.142 Min: 1 00:13:41.142 Number of Namespaces: 0 00:13:41.142 Compare Command: Not Supported 00:13:41.142 Write Uncorrectable Command: Not Supported 00:13:41.142 Dataset Management Command: Not Supported 00:13:41.142 Write Zeroes Command: Not Supported 00:13:41.142 Set Features Save Field: Not Supported 00:13:41.142 Reservations: Not Supported 00:13:41.142 Timestamp: Not Supported 00:13:41.142 Copy: Not Supported 00:13:41.142 Volatile Write Cache: Not Present 00:13:41.142 Atomic Write Unit (Normal): 1 00:13:41.142 Atomic Write Unit (PFail): 1 00:13:41.142 Atomic Compare & Write Unit: 1 00:13:41.142 Fused Compare & Write: Supported 00:13:41.142 Scatter-Gather List 00:13:41.142 SGL Command Set: Supported 00:13:41.142 SGL Keyed: Supported 00:13:41.142 SGL Bit Bucket Descriptor: Not Supported 00:13:41.142 SGL Metadata Pointer: Not Supported 00:13:41.142 Oversized SGL: Not Supported 00:13:41.142 SGL Metadata Address: Not Supported 00:13:41.142 SGL Offset: Supported 00:13:41.142 Transport SGL Data Block: Not Supported 00:13:41.142 Replay Protected Memory Block: Not Supported 00:13:41.142 00:13:41.142 Firmware Slot Information 00:13:41.142 ========================= 00:13:41.142 Active slot: 0 00:13:41.142 00:13:41.142 00:13:41.142 Error Log 00:13:41.142 ========= 00:13:41.142 00:13:41.142 Active Namespaces 00:13:41.142 ================= 00:13:41.142 Discovery Log Page 00:13:41.142 ================== 00:13:41.142 Generation Counter: 2 00:13:41.142 Number of Records: 2 00:13:41.142 Record Format: 0 00:13:41.142 00:13:41.142 Discovery Log Entry 0 00:13:41.142 ---------------------- 00:13:41.142 Transport Type: 3 (TCP) 00:13:41.143 Address Family: 1 (IPv4) 00:13:41.143 Subsystem Type: 3 (Current Discovery Subsystem) 00:13:41.143 Entry Flags: 00:13:41.143 Duplicate Returned Information: 1 00:13:41.143 Explicit Persistent Connection Support for Discovery: 1 00:13:41.143 Transport Requirements: 00:13:41.143 Secure Channel: Not Required 00:13:41.143 Port ID: 0 (0x0000) 00:13:41.143 Controller ID: 65535 (0xffff) 00:13:41.143 Admin Max SQ Size: 128 00:13:41.143 Transport Service Identifier: 4420 00:13:41.143 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:13:41.143 Transport Address: 10.0.0.2 00:13:41.143 
Discovery Log Entry 1 00:13:41.143 ---------------------- 00:13:41.143 Transport Type: 3 (TCP) 00:13:41.143 Address Family: 1 (IPv4) 00:13:41.143 Subsystem Type: 2 (NVM Subsystem) 00:13:41.143 Entry Flags: 00:13:41.143 Duplicate Returned Information: 0 00:13:41.143 Explicit Persistent Connection Support for Discovery: 0 00:13:41.143 Transport Requirements: 00:13:41.143 Secure Channel: Not Required 00:13:41.143 Port ID: 0 (0x0000) 00:13:41.143 Controller ID: 65535 (0xffff) 00:13:41.143 Admin Max SQ Size: 128 00:13:41.143 Transport Service Identifier: 4420 00:13:41.143 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:13:41.143 Transport Address: 10.0.0.2 [2024-12-14 06:43:55.064827] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.064832] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.064846] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.143 [2024-12-14 06:43:55.064853] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.143 [2024-12-14 06:43:55.064857] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.064861] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd804b0) on tqpair=0xd21d30 00:13:41.143 [2024-12-14 06:43:55.065016] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:13:41.143 [2024-12-14 06:43:55.065046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.143 [2024-12-14 06:43:55.065058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.143 [2024-12-14 06:43:55.065064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.143 [2024-12-14 06:43:55.065070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.143 [2024-12-14 06:43:55.065080] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065085] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065089] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21d30) 00:13:41.143 [2024-12-14 06:43:55.065097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.143 [2024-12-14 06:43:55.065124] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd80350, cid 3, qid 0 00:13:41.143 [2024-12-14 06:43:55.065196] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.143 [2024-12-14 06:43:55.065203] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.143 [2024-12-14 06:43:55.065207] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065211] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd80350) on tqpair=0xd21d30 00:13:41.143 [2024-12-14 06:43:55.065219] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065224] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065227] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21d30) 00:13:41.143 [2024-12-14 06:43:55.065235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.143 [2024-12-14 06:43:55.065271] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd80350, cid 3, qid 0 00:13:41.143 [2024-12-14 06:43:55.065351] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.143 [2024-12-14 06:43:55.065358] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.143 [2024-12-14 06:43:55.065361] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065365] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd80350) on tqpair=0xd21d30 00:13:41.143 [2024-12-14 06:43:55.065370] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:13:41.143 [2024-12-14 06:43:55.065375] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:13:41.143 [2024-12-14 06:43:55.065385] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065389] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065393] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21d30) 00:13:41.143 [2024-12-14 06:43:55.065400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.143 [2024-12-14 06:43:55.065416] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd80350, cid 3, qid 0 00:13:41.143 [2024-12-14 06:43:55.065477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.143 [2024-12-14 06:43:55.065484] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.143 [2024-12-14 06:43:55.065487] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065491] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd80350) on tqpair=0xd21d30 00:13:41.143 [2024-12-14 06:43:55.065502] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065506] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065510] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21d30) 00:13:41.143 [2024-12-14 06:43:55.065517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.143 [2024-12-14 06:43:55.065532] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd80350, cid 3, qid 0 00:13:41.143 [2024-12-14 06:43:55.065590] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.143 [2024-12-14 06:43:55.065596] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.143 [2024-12-14 06:43:55.065600] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065604] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd80350) on tqpair=0xd21d30 00:13:41.143 [2024-12-14 06:43:55.065614] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065618] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.143 [2024-12-14 06:43:55.065622] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21d30) 00:13:41.143 [2024-12-14 06:43:55.065629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.143 [2024-12-14 06:43:55.065644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd80350, cid 3, qid 0 00:13:41.143 [2024-12-14 06:43:55.065707] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.144 [2024-12-14 06:43:55.065713] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.144 [2024-12-14 06:43:55.065717] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.144 [2024-12-14 06:43:55.065721] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd80350) on tqpair=0xd21d30 00:13:41.144 [2024-12-14 06:43:55.065731] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.144 [2024-12-14 06:43:55.065735] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.144 [2024-12-14 06:43:55.065746] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21d30) 00:13:41.144 [2024-12-14 06:43:55.065752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.144 [2024-12-14 06:43:55.065768] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd80350, cid 3, qid 0 00:13:41.144 [2024-12-14 06:43:55.065829] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.144 [2024-12-14 06:43:55.065835] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.144 [2024-12-14 06:43:55.065839] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.144 [2024-12-14 06:43:55.065843] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd80350) on tqpair=0xd21d30 00:13:41.144 [2024-12-14 06:43:55.065853] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.144 [2024-12-14 06:43:55.065857] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.144 [2024-12-14 06:43:55.065861] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21d30) 00:13:41.144 [2024-12-14 06:43:55.065867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.144 [2024-12-14 06:43:55.065883] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd80350, cid 3, qid 0 00:13:41.144 [2024-12-14 06:43:55.068961] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.144 [2024-12-14 06:43:55.068979] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.144 [2024-12-14 06:43:55.068984] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.144 [2024-12-14 06:43:55.068989] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd80350) on tqpair=0xd21d30 00:13:41.144 [2024-12-14 06:43:55.069001] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.144 [2024-12-14 06:43:55.069006] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.144 [2024-12-14 06:43:55.069010] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd21d30) 
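The repeated FABRIC PROPERTY GET entries in this stretch are the host polling the discovery controller's CSTS register over the fabric while it tears the controller down after fetching the discovery log page printed above. A minimal host-side sketch of the flow that produces this connect / read-log / detach sequence is given here. It assumes SPDK's public NVMe driver API (spdk_nvme_probe, spdk_nvme_transport_id_parse) and reuses only the discovery address and NQN that appear in this log; the program name and callback names are illustrative, not taken from the test.

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Called once per subsystem listed in the discovery log page. */
    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("discovered %s at %s:%s\n", trid->subnqn, trid->traddr, trid->trsvcid);
        return false; /* do not attach; we only want the discovery log contents */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
        /* never reached because probe_cb always returns false */
    }

    int
    main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};

        spdk_env_opts_init(&env_opts);
        env_opts.name = "discovery_sketch";   /* illustrative name */
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Discovery service address and NQN taken from the log above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
            return 1;
        }

        /*
         * spdk_nvme_probe() connects to the discovery controller, reads the
         * discovery log page, calls probe_cb for each entry, and then detaches
         * the discovery controller - the CC write and CSTS polling visible in
         * the surrounding *DEBUG* entries.
         */
        if (spdk_nvme_probe(&trid, NULL, probe_cb, attach_cb, NULL) != 0) {
            return 1;
        }
        return 0;
    }
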
00:13:41.144 [2024-12-14 06:43:55.069034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.144 [2024-12-14 06:43:55.069059] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd80350, cid 3, qid 0 00:13:41.144 [2024-12-14 06:43:55.069116] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.144 [2024-12-14 06:43:55.069139] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.144 [2024-12-14 06:43:55.069143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.144 [2024-12-14 06:43:55.069147] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd80350) on tqpair=0xd21d30 00:13:41.144 [2024-12-14 06:43:55.069155] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 3 milliseconds 00:13:41.144 00:13:41.144 06:43:55 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:13:41.144 [2024-12-14 06:43:55.109360] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:41.144 [2024-12-14 06:43:55.109412] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68499 ] 00:13:41.408 [2024-12-14 06:43:55.247207] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:13:41.408 [2024-12-14 06:43:55.247278] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:13:41.408 [2024-12-14 06:43:55.247285] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:13:41.408 [2024-12-14 06:43:55.247297] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:13:41.408 [2024-12-14 06:43:55.247307] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:13:41.408 [2024-12-14 06:43:55.247421] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:13:41.408 [2024-12-14 06:43:55.247486] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1030d30 0 00:13:41.408 [2024-12-14 06:43:55.259975] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:13:41.408 [2024-12-14 06:43:55.259997] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:13:41.408 [2024-12-14 06:43:55.260019] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:13:41.408 [2024-12-14 06:43:55.260023] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:13:41.408 [2024-12-14 06:43:55.260062] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.408 [2024-12-14 06:43:55.260069] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.408 [2024-12-14 06:43:55.260073] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1030d30) 00:13:41.408 [2024-12-14 06:43:55.260085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:41.408 [2024-12-14 06:43:55.260114] 
nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108ef30, cid 0, qid 0 00:13:41.408 [2024-12-14 06:43:55.267987] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.408 [2024-12-14 06:43:55.268008] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.408 [2024-12-14 06:43:55.268030] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.408 [2024-12-14 06:43:55.268035] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108ef30) on tqpair=0x1030d30 00:13:41.408 [2024-12-14 06:43:55.268047] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:13:41.408 [2024-12-14 06:43:55.268055] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:13:41.408 [2024-12-14 06:43:55.268062] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:13:41.408 [2024-12-14 06:43:55.268077] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.408 [2024-12-14 06:43:55.268083] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.408 [2024-12-14 06:43:55.268087] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1030d30) 00:13:41.408 [2024-12-14 06:43:55.268096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.408 [2024-12-14 06:43:55.268123] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108ef30, cid 0, qid 0 00:13:41.408 [2024-12-14 06:43:55.268181] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.408 [2024-12-14 06:43:55.268188] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.408 [2024-12-14 06:43:55.268192] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.408 [2024-12-14 06:43:55.268196] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108ef30) on tqpair=0x1030d30 00:13:41.408 [2024-12-14 06:43:55.268202] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:13:41.408 [2024-12-14 06:43:55.268210] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:13:41.408 [2024-12-14 06:43:55.268218] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.408 [2024-12-14 06:43:55.268222] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.408 [2024-12-14 06:43:55.268226] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1030d30) 00:13:41.408 [2024-12-14 06:43:55.268250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.408 [2024-12-14 06:43:55.268268] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108ef30, cid 0, qid 0 00:13:41.408 [2024-12-14 06:43:55.268660] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.408 [2024-12-14 06:43:55.268672] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.408 [2024-12-14 06:43:55.268677] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.408 [2024-12-14 06:43:55.268681] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108ef30) 
on tqpair=0x1030d30 00:13:41.408 [2024-12-14 06:43:55.268703] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:13:41.408 [2024-12-14 06:43:55.268712] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:13:41.408 [2024-12-14 06:43:55.268720] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.408 [2024-12-14 06:43:55.268724] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.408 [2024-12-14 06:43:55.268728] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1030d30) 00:13:41.408 [2024-12-14 06:43:55.268735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.408 [2024-12-14 06:43:55.268753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108ef30, cid 0, qid 0 00:13:41.409 [2024-12-14 06:43:55.268814] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.409 [2024-12-14 06:43:55.268820] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.409 [2024-12-14 06:43:55.268824] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.268828] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108ef30) on tqpair=0x1030d30 00:13:41.409 [2024-12-14 06:43:55.268850] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:41.409 [2024-12-14 06:43:55.268861] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.268865] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.268869] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1030d30) 00:13:41.409 [2024-12-14 06:43:55.268876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.409 [2024-12-14 06:43:55.268909] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108ef30, cid 0, qid 0 00:13:41.409 [2024-12-14 06:43:55.269353] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.409 [2024-12-14 06:43:55.269370] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.409 [2024-12-14 06:43:55.269375] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.269379] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108ef30) on tqpair=0x1030d30 00:13:41.409 [2024-12-14 06:43:55.269386] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:13:41.409 [2024-12-14 06:43:55.269391] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:13:41.409 [2024-12-14 06:43:55.269415] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:41.409 [2024-12-14 06:43:55.269525] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:13:41.409 [2024-12-14 06:43:55.269530] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:41.409 [2024-12-14 06:43:55.269540] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.269544] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.269548] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1030d30) 00:13:41.409 [2024-12-14 06:43:55.269556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.409 [2024-12-14 06:43:55.269592] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108ef30, cid 0, qid 0 00:13:41.409 [2024-12-14 06:43:55.269940] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.409 [2024-12-14 06:43:55.269968] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.409 [2024-12-14 06:43:55.269973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.269978] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108ef30) on tqpair=0x1030d30 00:13:41.409 [2024-12-14 06:43:55.269984] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:41.409 [2024-12-14 06:43:55.269996] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.270001] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.270005] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1030d30) 00:13:41.409 [2024-12-14 06:43:55.270013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.409 [2024-12-14 06:43:55.270033] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108ef30, cid 0, qid 0 00:13:41.409 [2024-12-14 06:43:55.270277] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.409 [2024-12-14 06:43:55.270291] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.409 [2024-12-14 06:43:55.270295] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.270299] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108ef30) on tqpair=0x1030d30 00:13:41.409 [2024-12-14 06:43:55.270305] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:41.409 [2024-12-14 06:43:55.270311] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:13:41.409 [2024-12-14 06:43:55.270319] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:13:41.409 [2024-12-14 06:43:55.270351] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:13:41.409 [2024-12-14 06:43:55.270362] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.270367] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.270371] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1030d30) 00:13:41.409 [2024-12-14 06:43:55.270379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.409 [2024-12-14 06:43:55.270399] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108ef30, cid 0, qid 0 00:13:41.409 [2024-12-14 06:43:55.270904] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:41.409 [2024-12-14 06:43:55.270924] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:41.409 [2024-12-14 06:43:55.270929] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.270933] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1030d30): datao=0, datal=4096, cccid=0 00:13:41.409 [2024-12-14 06:43:55.270939] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x108ef30) on tqpair(0x1030d30): expected_datao=0, payload_size=4096 00:13:41.409 [2024-12-14 06:43:55.270948] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.270953] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.270962] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.409 [2024-12-14 06:43:55.270969] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.409 [2024-12-14 06:43:55.270973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.270977] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108ef30) on tqpair=0x1030d30 00:13:41.409 [2024-12-14 06:43:55.270988] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:13:41.409 [2024-12-14 06:43:55.270994] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:13:41.409 [2024-12-14 06:43:55.270999] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:13:41.409 [2024-12-14 06:43:55.271003] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:13:41.409 [2024-12-14 06:43:55.271008] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:13:41.409 [2024-12-14 06:43:55.271014] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:13:41.409 [2024-12-14 06:43:55.271029] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:13:41.409 [2024-12-14 06:43:55.271038] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.271043] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.271047] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1030d30) 00:13:41.409 [2024-12-14 06:43:55.271056] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.409 [2024-12-14 06:43:55.271082] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108ef30, cid 0, qid 0 00:13:41.409 [2024-12-14 
06:43:55.271466] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.409 [2024-12-14 06:43:55.271481] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.409 [2024-12-14 06:43:55.271486] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.271490] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108ef30) on tqpair=0x1030d30 00:13:41.409 [2024-12-14 06:43:55.271500] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.271504] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.271508] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1030d30) 00:13:41.409 [2024-12-14 06:43:55.271516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.409 [2024-12-14 06:43:55.271522] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.271526] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.271530] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1030d30) 00:13:41.409 [2024-12-14 06:43:55.271536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.409 [2024-12-14 06:43:55.271543] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.271547] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.271551] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1030d30) 00:13:41.409 [2024-12-14 06:43:55.271557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.409 [2024-12-14 06:43:55.271563] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.271567] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.271570] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.409 [2024-12-14 06:43:55.271576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.409 [2024-12-14 06:43:55.271582] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:41.409 [2024-12-14 06:43:55.271596] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:41.409 [2024-12-14 06:43:55.271603] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.409 [2024-12-14 06:43:55.271607] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.271611] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1030d30) 00:13:41.410 [2024-12-14 06:43:55.271619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.410 [2024-12-14 06:43:55.271640] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x108ef30, cid 0, qid 0 00:13:41.410 [2024-12-14 06:43:55.271648] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f090, cid 1, qid 0 00:13:41.410 [2024-12-14 06:43:55.271653] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f1f0, cid 2, qid 0 00:13:41.410 [2024-12-14 06:43:55.271657] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 0 00:13:41.410 [2024-12-14 06:43:55.271663] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f4b0, cid 4, qid 0 00:13:41.410 [2024-12-14 06:43:55.275958] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.410 [2024-12-14 06:43:55.275983] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.410 [2024-12-14 06:43:55.275988] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.275993] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f4b0) on tqpair=0x1030d30 00:13:41.410 [2024-12-14 06:43:55.276000] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:13:41.410 [2024-12-14 06:43:55.276006] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.276030] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.276055] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.276067] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.276072] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.276076] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1030d30) 00:13:41.410 [2024-12-14 06:43:55.276084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:41.410 [2024-12-14 06:43:55.276112] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f4b0, cid 4, qid 0 00:13:41.410 [2024-12-14 06:43:55.276168] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.410 [2024-12-14 06:43:55.276192] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.410 [2024-12-14 06:43:55.276196] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.276200] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f4b0) on tqpair=0x1030d30 00:13:41.410 [2024-12-14 06:43:55.276293] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.276304] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.276312] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.276316] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.276320] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0x1030d30) 00:13:41.410 [2024-12-14 06:43:55.276327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.410 [2024-12-14 06:43:55.276346] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f4b0, cid 4, qid 0 00:13:41.410 [2024-12-14 06:43:55.276701] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:41.410 [2024-12-14 06:43:55.276714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:41.410 [2024-12-14 06:43:55.276718] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.276722] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1030d30): datao=0, datal=4096, cccid=4 00:13:41.410 [2024-12-14 06:43:55.276727] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x108f4b0) on tqpair(0x1030d30): expected_datao=0, payload_size=4096 00:13:41.410 [2024-12-14 06:43:55.276736] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.276740] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.276749] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.410 [2024-12-14 06:43:55.276756] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.410 [2024-12-14 06:43:55.276759] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.276764] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f4b0) on tqpair=0x1030d30 00:13:41.410 [2024-12-14 06:43:55.276796] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:13:41.410 [2024-12-14 06:43:55.276807] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.276817] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.276837] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.276841] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.276845] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1030d30) 00:13:41.410 [2024-12-14 06:43:55.276852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.410 [2024-12-14 06:43:55.276872] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f4b0, cid 4, qid 0 00:13:41.410 [2024-12-14 06:43:55.276980] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:41.410 [2024-12-14 06:43:55.276989] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:41.410 [2024-12-14 06:43:55.276993] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.276997] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1030d30): datao=0, datal=4096, cccid=4 00:13:41.410 [2024-12-14 06:43:55.277001] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x108f4b0) on tqpair(0x1030d30): expected_datao=0, payload_size=4096 
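Here the driver has learned from IDENTIFY (active namespace list) that namespace 1 exists ("Namespace 1 was added") and is issuing the per-namespace IDENTIFY commands for it. A short, hedged sketch of how an application walks those namespaces once a controller like this one is attached follows; ctrlr stands for any attached controller handle (for example one returned by spdk_nvme_connect), and the helper name print_active_namespaces is illustrative.

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    /*
     * Walk the active namespace list of an attached controller.  This mirrors
     * the "identify active ns" / "identify ns" steps in the log: the driver
     * has already cached the identify data, so no new admin commands are sent.
     */
    static void
    print_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        uint32_t nsid;

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
                continue;
            }
            printf("nsid %u: %" PRIu64 " bytes, %u-byte sectors\n",
                   nsid, spdk_nvme_ns_get_size(ns),
                   spdk_nvme_ns_get_sector_size(ns));
        }
    }
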
00:13:41.410 [2024-12-14 06:43:55.277010] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.277014] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.277022] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.410 [2024-12-14 06:43:55.277028] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.410 [2024-12-14 06:43:55.277032] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.277036] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f4b0) on tqpair=0x1030d30 00:13:41.410 [2024-12-14 06:43:55.277052] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.277064] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.277073] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.277077] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.277081] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1030d30) 00:13:41.410 [2024-12-14 06:43:55.277090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.410 [2024-12-14 06:43:55.277111] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f4b0, cid 4, qid 0 00:13:41.410 [2024-12-14 06:43:55.277165] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:41.410 [2024-12-14 06:43:55.277172] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:41.410 [2024-12-14 06:43:55.277176] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.277180] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1030d30): datao=0, datal=4096, cccid=4 00:13:41.410 [2024-12-14 06:43:55.277184] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x108f4b0) on tqpair(0x1030d30): expected_datao=0, payload_size=4096 00:13:41.410 [2024-12-14 06:43:55.277192] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.277196] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.277204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.410 [2024-12-14 06:43:55.277210] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.410 [2024-12-14 06:43:55.277214] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.277218] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f4b0) on tqpair=0x1030d30 00:13:41.410 [2024-12-14 06:43:55.277228] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.277237] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.277250] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.277257] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.277263] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.277268] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:13:41.410 [2024-12-14 06:43:55.277288] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:13:41.410 [2024-12-14 06:43:55.277293] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:13:41.410 [2024-12-14 06:43:55.277322] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.277332] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.277336] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1030d30) 00:13:41.410 [2024-12-14 06:43:55.277344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.410 [2024-12-14 06:43:55.277352] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.277356] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.410 [2024-12-14 06:43:55.277359] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1030d30) 00:13:41.410 [2024-12-14 06:43:55.277366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:13:41.410 [2024-12-14 06:43:55.277395] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f4b0, cid 4, qid 0 00:13:41.411 [2024-12-14 06:43:55.277403] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f610, cid 5, qid 0 00:13:41.411 [2024-12-14 06:43:55.277487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.411 [2024-12-14 06:43:55.277494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.411 [2024-12-14 06:43:55.277498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.277502] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f4b0) on tqpair=0x1030d30 00:13:41.411 [2024-12-14 06:43:55.277510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.411 [2024-12-14 06:43:55.277516] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.411 [2024-12-14 06:43:55.277519] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.277523] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f610) on tqpair=0x1030d30 00:13:41.411 [2024-12-14 06:43:55.277534] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.277539] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.277543] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1030d30) 
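The controller has now reached the ready state, and the identify tool goes on to issue the GET FEATURES and GET LOG PAGE admin commands seen in the next entries before printing the report for nqn.2016-06.io.spdk:cnode1 further below. A minimal sketch of the equivalent application path, assuming SPDK's public API: connect with the same transport string that was passed to spdk_nvme_identify -r above, then read the cached controller identify data. The printed fields correspond to the Serial Number, Model Number, and Firmware Version lines in that report; the program name is illustrative.

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int
    main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";    /* illustrative name */
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same transport string passed to spdk_nvme_identify -r in this log. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Runs the whole init sequence traced above: icreq, FABRIC CONNECT,
         * CC/CSTS handling, IDENTIFY, AER setup, keep-alive, namespace scan. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("SN: %.20s  MN: %.40s  FR: %.8s\n",
               (const char *)cdata->sn, (const char *)cdata->mn,
               (const char *)cdata->fr);

        spdk_nvme_detach(ctrlr);
        return 0;
    }
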
00:13:41.411 [2024-12-14 06:43:55.277550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.411 [2024-12-14 06:43:55.277567] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f610, cid 5, qid 0 00:13:41.411 [2024-12-14 06:43:55.277616] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.411 [2024-12-14 06:43:55.277623] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.411 [2024-12-14 06:43:55.277626] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.277642] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f610) on tqpair=0x1030d30 00:13:41.411 [2024-12-14 06:43:55.277654] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.277658] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.277662] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1030d30) 00:13:41.411 [2024-12-14 06:43:55.277669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.411 [2024-12-14 06:43:55.277686] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f610, cid 5, qid 0 00:13:41.411 [2024-12-14 06:43:55.278159] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.411 [2024-12-14 06:43:55.278176] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.411 [2024-12-14 06:43:55.278181] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.278186] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f610) on tqpair=0x1030d30 00:13:41.411 [2024-12-14 06:43:55.278199] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.278204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.278208] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1030d30) 00:13:41.411 [2024-12-14 06:43:55.278216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.411 [2024-12-14 06:43:55.278251] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f610, cid 5, qid 0 00:13:41.411 [2024-12-14 06:43:55.278320] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.411 [2024-12-14 06:43:55.278343] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.411 [2024-12-14 06:43:55.278347] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.278351] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f610) on tqpair=0x1030d30 00:13:41.411 [2024-12-14 06:43:55.278367] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.278372] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.278376] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1030d30) 00:13:41.411 [2024-12-14 06:43:55.278383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 
nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.411 [2024-12-14 06:43:55.278391] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.278395] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.278400] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1030d30) 00:13:41.411 [2024-12-14 06:43:55.278406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.411 [2024-12-14 06:43:55.278414] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.278418] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.278421] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1030d30) 00:13:41.411 [2024-12-14 06:43:55.278428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.411 [2024-12-14 06:43:55.278436] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.278440] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.278444] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1030d30) 00:13:41.411 [2024-12-14 06:43:55.278450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.411 [2024-12-14 06:43:55.278470] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f610, cid 5, qid 0 00:13:41.411 [2024-12-14 06:43:55.278477] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f4b0, cid 4, qid 0 00:13:41.411 [2024-12-14 06:43:55.278483] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f770, cid 6, qid 0 00:13:41.411 [2024-12-14 06:43:55.278488] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f8d0, cid 7, qid 0 00:13:41.411 [2024-12-14 06:43:55.279040] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:41.411 [2024-12-14 06:43:55.279056] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:41.411 [2024-12-14 06:43:55.279076] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279080] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1030d30): datao=0, datal=8192, cccid=5 00:13:41.411 [2024-12-14 06:43:55.279101] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x108f610) on tqpair(0x1030d30): expected_datao=0, payload_size=8192 00:13:41.411 [2024-12-14 06:43:55.279120] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279126] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279132] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:41.411 [2024-12-14 06:43:55.279139] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:41.411 [2024-12-14 06:43:55.279143] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279147] 
nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1030d30): datao=0, datal=512, cccid=4 00:13:41.411 [2024-12-14 06:43:55.279162] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x108f4b0) on tqpair(0x1030d30): expected_datao=0, payload_size=512 00:13:41.411 [2024-12-14 06:43:55.279170] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279174] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279180] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:41.411 [2024-12-14 06:43:55.279186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:41.411 [2024-12-14 06:43:55.279190] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279194] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1030d30): datao=0, datal=512, cccid=6 00:13:41.411 [2024-12-14 06:43:55.279213] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x108f770) on tqpair(0x1030d30): expected_datao=0, payload_size=512 00:13:41.411 [2024-12-14 06:43:55.279221] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279224] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279230] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:13:41.411 [2024-12-14 06:43:55.279236] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:13:41.411 [2024-12-14 06:43:55.279240] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279243] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1030d30): datao=0, datal=4096, cccid=7 00:13:41.411 [2024-12-14 06:43:55.279248] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x108f8d0) on tqpair(0x1030d30): expected_datao=0, payload_size=4096 00:13:41.411 [2024-12-14 06:43:55.279255] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279259] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279265] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.411 [2024-12-14 06:43:55.279271] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.411 [2024-12-14 06:43:55.279274] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279278] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f610) on tqpair=0x1030d30 00:13:41.411 [2024-12-14 06:43:55.279296] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.411 [2024-12-14 06:43:55.279303] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.411 [2024-12-14 06:43:55.279307] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.411 [2024-12-14 06:43:55.279311] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f4b0) on tqpair=0x1030d30 00:13:41.411 [2024-12-14 06:43:55.279322] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.411 [2024-12-14 06:43:55.279329] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.411 [2024-12-14 06:43:55.279333] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.411 ===================================================== 
00:13:41.411 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:41.411 ===================================================== 00:13:41.411 Controller Capabilities/Features 00:13:41.411 ================================ 00:13:41.412 Vendor ID: 8086 00:13:41.412 Subsystem Vendor ID: 8086 00:13:41.412 Serial Number: SPDK00000000000001 00:13:41.412 Model Number: SPDK bdev Controller 00:13:41.412 Firmware Version: 24.01.1 00:13:41.412 Recommended Arb Burst: 6 00:13:41.412 IEEE OUI Identifier: e4 d2 5c 00:13:41.412 Multi-path I/O 00:13:41.412 May have multiple subsystem ports: Yes 00:13:41.412 May have multiple controllers: Yes 00:13:41.412 Associated with SR-IOV VF: No 00:13:41.412 Max Data Transfer Size: 131072 00:13:41.412 Max Number of Namespaces: 32 00:13:41.412 Max Number of I/O Queues: 127 00:13:41.412 NVMe Specification Version (VS): 1.3 00:13:41.412 NVMe Specification Version (Identify): 1.3 00:13:41.412 Maximum Queue Entries: 128 00:13:41.412 Contiguous Queues Required: Yes 00:13:41.412 Arbitration Mechanisms Supported 00:13:41.412 Weighted Round Robin: Not Supported 00:13:41.412 Vendor Specific: Not Supported 00:13:41.412 Reset Timeout: 15000 ms 00:13:41.412 Doorbell Stride: 4 bytes 00:13:41.412 NVM Subsystem Reset: Not Supported 00:13:41.412 Command Sets Supported 00:13:41.412 NVM Command Set: Supported 00:13:41.412 Boot Partition: Not Supported 00:13:41.412 Memory Page Size Minimum: 4096 bytes 00:13:41.412 Memory Page Size Maximum: 4096 bytes 00:13:41.412 Persistent Memory Region: Not Supported 00:13:41.412 Optional Asynchronous Events Supported 00:13:41.412 Namespace Attribute Notices: Supported 00:13:41.412 Firmware Activation Notices: Not Supported 00:13:41.412 ANA Change Notices: Not Supported 00:13:41.412 PLE Aggregate Log Change Notices: Not Supported 00:13:41.412 LBA Status Info Alert Notices: Not Supported 00:13:41.412 EGE Aggregate Log Change Notices: Not Supported 00:13:41.412 Normal NVM Subsystem Shutdown event: Not Supported 00:13:41.412 Zone Descriptor Change Notices: Not Supported 00:13:41.412 Discovery Log Change Notices: Not Supported 00:13:41.412 Controller Attributes 00:13:41.412 128-bit Host Identifier: Supported 00:13:41.412 Non-Operational Permissive Mode: Not Supported 00:13:41.412 NVM Sets: Not Supported 00:13:41.412 Read Recovery Levels: Not Supported 00:13:41.412 Endurance Groups: Not Supported 00:13:41.412 Predictable Latency Mode: Not Supported 00:13:41.412 Traffic Based Keep ALive: Not Supported 00:13:41.412 Namespace Granularity: Not Supported 00:13:41.412 SQ Associations: Not Supported 00:13:41.412 UUID List: Not Supported 00:13:41.412 Multi-Domain Subsystem: Not Supported 00:13:41.412 Fixed Capacity Management: Not Supported 00:13:41.412 Variable Capacity Management: Not Supported 00:13:41.412 Delete Endurance Group: Not Supported 00:13:41.412 Delete NVM Set: Not Supported 00:13:41.412 Extended LBA Formats Supported: Not Supported 00:13:41.412 Flexible Data Placement Supported: Not Supported 00:13:41.412 00:13:41.412 Controller Memory Buffer Support 00:13:41.412 ================================ 00:13:41.412 Supported: No 00:13:41.412 00:13:41.412 Persistent Memory Region Support 00:13:41.412 ================================ 00:13:41.412 Supported: No 00:13:41.412 00:13:41.412 Admin Command Set Attributes 00:13:41.412 ============================ 00:13:41.412 Security Send/Receive: Not Supported 00:13:41.412 Format NVM: Not Supported 00:13:41.412 Firmware Activate/Download: Not Supported 00:13:41.412 Namespace Management: Not 
Supported 00:13:41.412 Device Self-Test: Not Supported 00:13:41.412 Directives: Not Supported 00:13:41.412 NVMe-MI: Not Supported 00:13:41.412 Virtualization Management: Not Supported 00:13:41.412 Doorbell Buffer Config: Not Supported 00:13:41.412 Get LBA Status Capability: Not Supported 00:13:41.412 Command & Feature Lockdown Capability: Not Supported 00:13:41.412 Abort Command Limit: 4 00:13:41.412 Async Event Request Limit: 4 00:13:41.412 Number of Firmware Slots: N/A 00:13:41.412 Firmware Slot 1 Read-Only: N/A 00:13:41.412 Firmware Activation Without Reset: [2024-12-14 06:43:55.279337] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f770) on tqpair=0x1030d30 00:13:41.412 [2024-12-14 06:43:55.279346] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.412 [2024-12-14 06:43:55.279352] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.412 [2024-12-14 06:43:55.279356] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.412 [2024-12-14 06:43:55.279360] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f8d0) on tqpair=0x1030d30 00:13:41.412 N/A 00:13:41.412 Multiple Update Detection Support: N/A 00:13:41.412 Firmware Update Granularity: No Information Provided 00:13:41.412 Per-Namespace SMART Log: No 00:13:41.412 Asymmetric Namespace Access Log Page: Not Supported 00:13:41.412 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:13:41.412 Command Effects Log Page: Supported 00:13:41.412 Get Log Page Extended Data: Supported 00:13:41.412 Telemetry Log Pages: Not Supported 00:13:41.412 Persistent Event Log Pages: Not Supported 00:13:41.412 Supported Log Pages Log Page: May Support 00:13:41.412 Commands Supported & Effects Log Page: Not Supported 00:13:41.412 Feature Identifiers & Effects Log Page:May Support 00:13:41.412 NVMe-MI Commands & Effects Log Page: May Support 00:13:41.412 Data Area 4 for Telemetry Log: Not Supported 00:13:41.412 Error Log Page Entries Supported: 128 00:13:41.412 Keep Alive: Supported 00:13:41.412 Keep Alive Granularity: 10000 ms 00:13:41.412 00:13:41.412 NVM Command Set Attributes 00:13:41.412 ========================== 00:13:41.412 Submission Queue Entry Size 00:13:41.412 Max: 64 00:13:41.412 Min: 64 00:13:41.412 Completion Queue Entry Size 00:13:41.412 Max: 16 00:13:41.412 Min: 16 00:13:41.412 Number of Namespaces: 32 00:13:41.412 Compare Command: Supported 00:13:41.412 Write Uncorrectable Command: Not Supported 00:13:41.412 Dataset Management Command: Supported 00:13:41.412 Write Zeroes Command: Supported 00:13:41.412 Set Features Save Field: Not Supported 00:13:41.412 Reservations: Supported 00:13:41.412 Timestamp: Not Supported 00:13:41.412 Copy: Supported 00:13:41.412 Volatile Write Cache: Present 00:13:41.412 Atomic Write Unit (Normal): 1 00:13:41.412 Atomic Write Unit (PFail): 1 00:13:41.412 Atomic Compare & Write Unit: 1 00:13:41.412 Fused Compare & Write: Supported 00:13:41.412 Scatter-Gather List 00:13:41.412 SGL Command Set: Supported 00:13:41.412 SGL Keyed: Supported 00:13:41.412 SGL Bit Bucket Descriptor: Not Supported 00:13:41.412 SGL Metadata Pointer: Not Supported 00:13:41.412 Oversized SGL: Not Supported 00:13:41.412 SGL Metadata Address: Not Supported 00:13:41.412 SGL Offset: Supported 00:13:41.412 Transport SGL Data Block: Not Supported 00:13:41.412 Replay Protected Memory Block: Not Supported 00:13:41.412 00:13:41.412 Firmware Slot Information 00:13:41.412 ========================= 00:13:41.412 Active slot: 1 00:13:41.412 Slot 1 
Firmware Revision: 24.01.1 00:13:41.412 00:13:41.412 00:13:41.412 Commands Supported and Effects 00:13:41.412 ============================== 00:13:41.412 Admin Commands 00:13:41.412 -------------- 00:13:41.412 Get Log Page (02h): Supported 00:13:41.412 Identify (06h): Supported 00:13:41.412 Abort (08h): Supported 00:13:41.412 Set Features (09h): Supported 00:13:41.412 Get Features (0Ah): Supported 00:13:41.412 Asynchronous Event Request (0Ch): Supported 00:13:41.412 Keep Alive (18h): Supported 00:13:41.412 I/O Commands 00:13:41.412 ------------ 00:13:41.412 Flush (00h): Supported LBA-Change 00:13:41.412 Write (01h): Supported LBA-Change 00:13:41.412 Read (02h): Supported 00:13:41.412 Compare (05h): Supported 00:13:41.412 Write Zeroes (08h): Supported LBA-Change 00:13:41.412 Dataset Management (09h): Supported LBA-Change 00:13:41.412 Copy (19h): Supported LBA-Change 00:13:41.412 Unknown (79h): Supported LBA-Change 00:13:41.412 Unknown (7Ah): Supported 00:13:41.412 00:13:41.412 Error Log 00:13:41.412 ========= 00:13:41.412 00:13:41.412 Arbitration 00:13:41.412 =========== 00:13:41.412 Arbitration Burst: 1 00:13:41.412 00:13:41.412 Power Management 00:13:41.412 ================ 00:13:41.412 Number of Power States: 1 00:13:41.412 Current Power State: Power State #0 00:13:41.412 Power State #0: 00:13:41.412 Max Power: 0.00 W 00:13:41.412 Non-Operational State: Operational 00:13:41.412 Entry Latency: Not Reported 00:13:41.412 Exit Latency: Not Reported 00:13:41.413 Relative Read Throughput: 0 00:13:41.413 Relative Read Latency: 0 00:13:41.413 Relative Write Throughput: 0 00:13:41.413 Relative Write Latency: 0 00:13:41.413 Idle Power: Not Reported 00:13:41.413 Active Power: Not Reported 00:13:41.413 Non-Operational Permissive Mode: Not Supported 00:13:41.413 00:13:41.413 Health Information 00:13:41.413 ================== 00:13:41.413 Critical Warnings: 00:13:41.413 Available Spare Space: OK 00:13:41.413 Temperature: OK 00:13:41.413 Device Reliability: OK 00:13:41.413 Read Only: No 00:13:41.413 Volatile Memory Backup: OK 00:13:41.413 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:41.413 Temperature Threshold: [2024-12-14 06:43:55.279472] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.279479] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.279483] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1030d30) 00:13:41.413 [2024-12-14 06:43:55.279491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.413 [2024-12-14 06:43:55.279516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f8d0, cid 7, qid 0 00:13:41.413 [2024-12-14 06:43:55.279859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.413 [2024-12-14 06:43:55.279875] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.413 [2024-12-14 06:43:55.284018] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.284029] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f8d0) on tqpair=0x1030d30 00:13:41.413 [2024-12-14 06:43:55.284070] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:13:41.413 [2024-12-14 06:43:55.284086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.413 [2024-12-14 06:43:55.284093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.413 [2024-12-14 06:43:55.284099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.413 [2024-12-14 06:43:55.284106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:41.413 [2024-12-14 06:43:55.284115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.284120] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.284123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.413 [2024-12-14 06:43:55.284148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.413 [2024-12-14 06:43:55.284175] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 0 00:13:41.413 [2024-12-14 06:43:55.284552] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.413 [2024-12-14 06:43:55.284565] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.413 [2024-12-14 06:43:55.284569] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.284573] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f350) on tqpair=0x1030d30 00:13:41.413 [2024-12-14 06:43:55.284582] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.284586] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.284590] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.413 [2024-12-14 06:43:55.284598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.413 [2024-12-14 06:43:55.284620] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 0 00:13:41.413 [2024-12-14 06:43:55.284701] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.413 [2024-12-14 06:43:55.284708] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.413 [2024-12-14 06:43:55.284711] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.284715] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f350) on tqpair=0x1030d30 00:13:41.413 [2024-12-14 06:43:55.284737] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:13:41.413 [2024-12-14 06:43:55.284742] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:13:41.413 [2024-12-14 06:43:55.284752] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.284756] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.284760] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.413 [2024-12-14 06:43:55.284767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.413 [2024-12-14 06:43:55.284784] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 0 00:13:41.413 [2024-12-14 06:43:55.284831] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.413 [2024-12-14 06:43:55.284842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.413 [2024-12-14 06:43:55.284846] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.284866] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f350) on tqpair=0x1030d30 00:13:41.413 [2024-12-14 06:43:55.284879] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.284884] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.284887] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.413 [2024-12-14 06:43:55.284922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.413 [2024-12-14 06:43:55.284946] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 0 00:13:41.413 [2024-12-14 06:43:55.285423] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.413 [2024-12-14 06:43:55.285436] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.413 [2024-12-14 06:43:55.285441] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.285462] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f350) on tqpair=0x1030d30 00:13:41.413 [2024-12-14 06:43:55.285488] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.285493] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.285497] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.413 [2024-12-14 06:43:55.285504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.413 [2024-12-14 06:43:55.285522] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 0 00:13:41.413 [2024-12-14 06:43:55.285571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.413 [2024-12-14 06:43:55.285577] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.413 [2024-12-14 06:43:55.285581] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.285585] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f350) on tqpair=0x1030d30 00:13:41.413 [2024-12-14 06:43:55.285596] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.285616] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.285620] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.413 [2024-12-14 06:43:55.285628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.413 [2024-12-14 06:43:55.285644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 
0 00:13:41.413 [2024-12-14 06:43:55.285859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.413 [2024-12-14 06:43:55.285873] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.413 [2024-12-14 06:43:55.285877] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.285906] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f350) on tqpair=0x1030d30 00:13:41.413 [2024-12-14 06:43:55.285920] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.285925] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.285929] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.413 [2024-12-14 06:43:55.285936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.413 [2024-12-14 06:43:55.285956] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 0 00:13:41.413 [2024-12-14 06:43:55.286268] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.413 [2024-12-14 06:43:55.286281] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.413 [2024-12-14 06:43:55.286286] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.413 [2024-12-14 06:43:55.286290] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f350) on tqpair=0x1030d30 00:13:41.413 [2024-12-14 06:43:55.286317] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.286321] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.286325] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.414 [2024-12-14 06:43:55.286332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.414 [2024-12-14 06:43:55.286349] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 0 00:13:41.414 [2024-12-14 06:43:55.286605] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.414 [2024-12-14 06:43:55.286618] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.414 [2024-12-14 06:43:55.286622] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.286626] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f350) on tqpair=0x1030d30 00:13:41.414 [2024-12-14 06:43:55.286637] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.286642] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.286645] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.414 [2024-12-14 06:43:55.286653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.414 [2024-12-14 06:43:55.286669] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 0 00:13:41.414 [2024-12-14 06:43:55.287090] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.414 [2024-12-14 06:43:55.287106] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:13:41.414 [2024-12-14 06:43:55.287110] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.287115] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f350) on tqpair=0x1030d30 00:13:41.414 [2024-12-14 06:43:55.287128] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.287133] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.287137] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.414 [2024-12-14 06:43:55.287145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.414 [2024-12-14 06:43:55.287165] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 0 00:13:41.414 [2024-12-14 06:43:55.287440] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.414 [2024-12-14 06:43:55.287452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.414 [2024-12-14 06:43:55.287457] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.287461] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f350) on tqpair=0x1030d30 00:13:41.414 [2024-12-14 06:43:55.287472] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.287476] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.287480] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.414 [2024-12-14 06:43:55.287487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.414 [2024-12-14 06:43:55.287504] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 0 00:13:41.414 [2024-12-14 06:43:55.287768] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.414 [2024-12-14 06:43:55.287780] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.414 [2024-12-14 06:43:55.287785] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.287789] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f350) on tqpair=0x1030d30 00:13:41.414 [2024-12-14 06:43:55.287799] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.287804] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.287807] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.414 [2024-12-14 06:43:55.287815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.414 [2024-12-14 06:43:55.287831] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 0 00:13:41.414 [2024-12-14 06:43:55.291993] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.414 [2024-12-14 06:43:55.292013] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.414 [2024-12-14 06:43:55.292018] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.292022] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f350) on tqpair=0x1030d30 00:13:41.414 [2024-12-14 06:43:55.292036] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.292041] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.292045] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1030d30) 00:13:41.414 [2024-12-14 06:43:55.292053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:41.414 [2024-12-14 06:43:55.292077] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x108f350, cid 3, qid 0 00:13:41.414 [2024-12-14 06:43:55.292381] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:13:41.414 [2024-12-14 06:43:55.292395] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:13:41.414 [2024-12-14 06:43:55.292400] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:13:41.414 [2024-12-14 06:43:55.292404] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x108f350) on tqpair=0x1030d30 00:13:41.414 [2024-12-14 06:43:55.292413] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:13:41.414 0 Kelvin (-273 Celsius) 00:13:41.414 Available Spare: 0% 00:13:41.414 Available Spare Threshold: 0% 00:13:41.414 Life Percentage Used: 0% 00:13:41.414 Data Units Read: 0 00:13:41.414 Data Units Written: 0 00:13:41.414 Host Read Commands: 0 00:13:41.414 Host Write Commands: 0 00:13:41.414 Controller Busy Time: 0 minutes 00:13:41.414 Power Cycles: 0 00:13:41.414 Power On Hours: 0 hours 00:13:41.414 Unsafe Shutdowns: 0 00:13:41.414 Unrecoverable Media Errors: 0 00:13:41.414 Lifetime Error Log Entries: 0 00:13:41.414 Warning Temperature Time: 0 minutes 00:13:41.414 Critical Temperature Time: 0 minutes 00:13:41.414 00:13:41.414 Number of Queues 00:13:41.414 ================ 00:13:41.414 Number of I/O Submission Queues: 127 00:13:41.414 Number of I/O Completion Queues: 127 00:13:41.414 00:13:41.414 Active Namespaces 00:13:41.414 ================= 00:13:41.414 Namespace ID:1 00:13:41.414 Error Recovery Timeout: Unlimited 00:13:41.414 Command Set Identifier: NVM (00h) 00:13:41.414 Deallocate: Supported 00:13:41.414 Deallocated/Unwritten Error: Not Supported 00:13:41.414 Deallocated Read Value: Unknown 00:13:41.414 Deallocate in Write Zeroes: Not Supported 00:13:41.414 Deallocated Guard Field: 0xFFFF 00:13:41.414 Flush: Supported 00:13:41.414 Reservation: Supported 00:13:41.414 Namespace Sharing Capabilities: Multiple Controllers 00:13:41.414 Size (in LBAs): 131072 (0GiB) 00:13:41.414 Capacity (in LBAs): 131072 (0GiB) 00:13:41.414 Utilization (in LBAs): 131072 (0GiB) 00:13:41.414 NGUID: ABCDEF0123456789ABCDEF0123456789 00:13:41.414 EUI64: ABCDEF0123456789 00:13:41.414 UUID: 94beaa69-b065-4a59-92a9-242396233679 00:13:41.414 Thin Provisioning: Not Supported 00:13:41.414 Per-NS Atomic Units: Yes 00:13:41.414 Atomic Boundary Size (Normal): 0 00:13:41.414 Atomic Boundary Size (PFail): 0 00:13:41.414 Atomic Boundary Offset: 0 00:13:41.414 Maximum Single Source Range Length: 65535 00:13:41.414 Maximum Copy Length: 65535 00:13:41.414 Maximum Source Range Count: 1 00:13:41.414 NGUID/EUI64 Never Reused: No 00:13:41.414 Namespace Write Protected: No 00:13:41.414 Number of LBA Formats: 1 00:13:41.414 Current LBA Format: LBA Format #00 00:13:41.414 
LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:41.414 00:13:41.414 06:43:55 -- host/identify.sh@51 -- # sync 00:13:41.414 06:43:55 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.414 06:43:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.414 06:43:55 -- common/autotest_common.sh@10 -- # set +x 00:13:41.414 06:43:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.414 06:43:55 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:13:41.414 06:43:55 -- host/identify.sh@56 -- # nvmftestfini 00:13:41.414 06:43:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:41.414 06:43:55 -- nvmf/common.sh@116 -- # sync 00:13:41.414 06:43:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:41.414 06:43:55 -- nvmf/common.sh@119 -- # set +e 00:13:41.414 06:43:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:41.414 06:43:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:41.414 rmmod nvme_tcp 00:13:41.414 rmmod nvme_fabrics 00:13:41.414 rmmod nvme_keyring 00:13:41.672 06:43:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:41.672 06:43:55 -- nvmf/common.sh@123 -- # set -e 00:13:41.672 06:43:55 -- nvmf/common.sh@124 -- # return 0 00:13:41.672 06:43:55 -- nvmf/common.sh@477 -- # '[' -n 68462 ']' 00:13:41.672 06:43:55 -- nvmf/common.sh@478 -- # killprocess 68462 00:13:41.672 06:43:55 -- common/autotest_common.sh@936 -- # '[' -z 68462 ']' 00:13:41.672 06:43:55 -- common/autotest_common.sh@940 -- # kill -0 68462 00:13:41.672 06:43:55 -- common/autotest_common.sh@941 -- # uname 00:13:41.672 06:43:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:41.672 06:43:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68462 00:13:41.672 06:43:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:41.672 06:43:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:41.672 06:43:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68462' 00:13:41.672 killing process with pid 68462 00:13:41.672 06:43:55 -- common/autotest_common.sh@955 -- # kill 68462 00:13:41.672 [2024-12-14 06:43:55.449110] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:41.672 06:43:55 -- common/autotest_common.sh@960 -- # wait 68462 00:13:41.672 06:43:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:41.672 06:43:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:41.672 06:43:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:41.672 06:43:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:41.672 06:43:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:41.672 06:43:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.672 06:43:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.672 06:43:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.930 06:43:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:41.930 00:13:41.930 real 0m2.613s 00:13:41.930 user 0m7.245s 00:13:41.930 sys 0m0.609s 00:13:41.930 06:43:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:41.930 06:43:55 -- common/autotest_common.sh@10 -- # set +x 00:13:41.930 ************************************ 00:13:41.930 END TEST nvmf_identify 00:13:41.930 ************************************ 00:13:41.930 06:43:55 -- nvmf/nvmf.sh@98 -- # 
run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:41.930 06:43:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:41.930 06:43:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.930 06:43:55 -- common/autotest_common.sh@10 -- # set +x 00:13:41.930 ************************************ 00:13:41.930 START TEST nvmf_perf 00:13:41.930 ************************************ 00:13:41.930 06:43:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:13:41.930 * Looking for test storage... 00:13:41.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:13:41.930 06:43:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:41.930 06:43:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:41.930 06:43:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:41.930 06:43:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:41.930 06:43:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:41.930 06:43:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:41.930 06:43:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:41.930 06:43:55 -- scripts/common.sh@335 -- # IFS=.-: 00:13:41.930 06:43:55 -- scripts/common.sh@335 -- # read -ra ver1 00:13:41.930 06:43:55 -- scripts/common.sh@336 -- # IFS=.-: 00:13:41.930 06:43:55 -- scripts/common.sh@336 -- # read -ra ver2 00:13:41.930 06:43:55 -- scripts/common.sh@337 -- # local 'op=<' 00:13:41.930 06:43:55 -- scripts/common.sh@339 -- # ver1_l=2 00:13:41.930 06:43:55 -- scripts/common.sh@340 -- # ver2_l=1 00:13:41.930 06:43:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:41.930 06:43:55 -- scripts/common.sh@343 -- # case "$op" in 00:13:41.930 06:43:55 -- scripts/common.sh@344 -- # : 1 00:13:41.930 06:43:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:41.930 06:43:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:41.930 06:43:55 -- scripts/common.sh@364 -- # decimal 1 00:13:41.930 06:43:55 -- scripts/common.sh@352 -- # local d=1 00:13:41.930 06:43:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:41.930 06:43:55 -- scripts/common.sh@354 -- # echo 1 00:13:41.930 06:43:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:41.930 06:43:55 -- scripts/common.sh@365 -- # decimal 2 00:13:41.930 06:43:55 -- scripts/common.sh@352 -- # local d=2 00:13:41.930 06:43:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:41.930 06:43:55 -- scripts/common.sh@354 -- # echo 2 00:13:41.930 06:43:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:41.930 06:43:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:41.930 06:43:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:41.930 06:43:55 -- scripts/common.sh@367 -- # return 0 00:13:41.930 06:43:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:41.930 06:43:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:41.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.930 --rc genhtml_branch_coverage=1 00:13:41.930 --rc genhtml_function_coverage=1 00:13:41.930 --rc genhtml_legend=1 00:13:41.930 --rc geninfo_all_blocks=1 00:13:41.930 --rc geninfo_unexecuted_blocks=1 00:13:41.930 00:13:41.930 ' 00:13:41.930 06:43:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:41.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.930 --rc genhtml_branch_coverage=1 00:13:41.930 --rc genhtml_function_coverage=1 00:13:41.930 --rc genhtml_legend=1 00:13:41.931 --rc geninfo_all_blocks=1 00:13:41.931 --rc geninfo_unexecuted_blocks=1 00:13:41.931 00:13:41.931 ' 00:13:41.931 06:43:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:41.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.931 --rc genhtml_branch_coverage=1 00:13:41.931 --rc genhtml_function_coverage=1 00:13:41.931 --rc genhtml_legend=1 00:13:41.931 --rc geninfo_all_blocks=1 00:13:41.931 --rc geninfo_unexecuted_blocks=1 00:13:41.931 00:13:41.931 ' 00:13:41.931 06:43:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:41.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:41.931 --rc genhtml_branch_coverage=1 00:13:41.931 --rc genhtml_function_coverage=1 00:13:41.931 --rc genhtml_legend=1 00:13:41.931 --rc geninfo_all_blocks=1 00:13:41.931 --rc geninfo_unexecuted_blocks=1 00:13:41.931 00:13:41.931 ' 00:13:41.931 06:43:55 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:41.931 06:43:55 -- nvmf/common.sh@7 -- # uname -s 00:13:41.931 06:43:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.931 06:43:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.931 06:43:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.931 06:43:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.931 06:43:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.931 06:43:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.931 06:43:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.931 06:43:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.931 06:43:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.931 06:43:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.931 06:43:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:13:41.931 
06:43:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:13:41.931 06:43:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.931 06:43:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.931 06:43:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:41.931 06:43:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:42.189 06:43:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.189 06:43:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.189 06:43:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.189 06:43:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.189 06:43:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.189 06:43:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.189 06:43:55 -- paths/export.sh@5 -- # export PATH 00:13:42.189 06:43:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.189 06:43:55 -- nvmf/common.sh@46 -- # : 0 00:13:42.189 06:43:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:42.189 06:43:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:42.189 06:43:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:42.189 06:43:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.189 06:43:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.189 06:43:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
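The scripts/common.sh xtrace a little further up (cmp_versions / "lt 1.15 2") is the guard that decides whether the newer lcov option set is used: it splits both version strings on ".", "-" and ":" and compares the fields numerically, left to right. Condensed into a standalone helper, the logic is roughly the sketch below; this is a simplified restatement of what the trace walks through, not the script's exact code, and the name ver_lt is illustrative only.

ver_lt() {  # minimal sketch: returns 0 (true) when version $1 sorts before $2
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # all fields equal -> not less-than
}
ver_lt 1.15 2 && echo "lcov is older than 2.x, keep the legacy coverage flags"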
00:13:42.189 06:43:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:42.189 06:43:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:42.189 06:43:55 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:42.189 06:43:55 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:42.189 06:43:55 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:42.189 06:43:55 -- host/perf.sh@17 -- # nvmftestinit 00:13:42.189 06:43:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:42.189 06:43:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.189 06:43:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:42.189 06:43:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:42.189 06:43:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:42.189 06:43:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.189 06:43:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.189 06:43:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.189 06:43:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:42.189 06:43:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:42.189 06:43:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:42.189 06:43:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:42.189 06:43:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:42.189 06:43:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:42.189 06:43:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.189 06:43:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.189 06:43:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:42.189 06:43:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:42.189 06:43:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:42.189 06:43:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:42.189 06:43:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:42.189 06:43:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.189 06:43:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:42.189 06:43:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:42.189 06:43:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:42.189 06:43:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:42.189 06:43:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:42.189 06:43:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:42.189 Cannot find device "nvmf_tgt_br" 00:13:42.189 06:43:55 -- nvmf/common.sh@154 -- # true 00:13:42.189 06:43:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:42.189 Cannot find device "nvmf_tgt_br2" 00:13:42.189 06:43:55 -- nvmf/common.sh@155 -- # true 00:13:42.189 06:43:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:42.189 06:43:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:42.189 Cannot find device "nvmf_tgt_br" 00:13:42.189 06:43:55 -- nvmf/common.sh@157 -- # true 00:13:42.189 06:43:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:42.189 Cannot find device "nvmf_tgt_br2" 00:13:42.189 06:43:56 -- nvmf/common.sh@158 -- # true 00:13:42.189 06:43:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:42.190 06:43:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:42.190 06:43:56 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:42.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.190 06:43:56 -- nvmf/common.sh@161 -- # true 00:13:42.190 06:43:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:42.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:42.190 06:43:56 -- nvmf/common.sh@162 -- # true 00:13:42.190 06:43:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:42.190 06:43:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:42.190 06:43:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:42.190 06:43:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:42.190 06:43:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:42.190 06:43:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:42.190 06:43:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:42.190 06:43:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:42.190 06:43:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:42.190 06:43:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:42.190 06:43:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:42.190 06:43:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:42.190 06:43:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:42.190 06:43:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:42.190 06:43:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:42.190 06:43:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:42.190 06:43:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:42.190 06:43:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:42.448 06:43:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:42.448 06:43:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:42.448 06:43:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:42.448 06:43:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:42.448 06:43:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:42.448 06:43:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:42.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:13:42.448 00:13:42.448 --- 10.0.0.2 ping statistics --- 00:13:42.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.448 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:13:42.448 06:43:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:42.448 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:42.448 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:13:42.448 00:13:42.448 --- 10.0.0.3 ping statistics --- 00:13:42.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.448 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:42.448 06:43:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:42.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:42.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:13:42.448 00:13:42.448 --- 10.0.0.1 ping statistics --- 00:13:42.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.448 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:13:42.448 06:43:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.448 06:43:56 -- nvmf/common.sh@421 -- # return 0 00:13:42.448 06:43:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:42.448 06:43:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.448 06:43:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:42.448 06:43:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:42.448 06:43:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.448 06:43:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:42.448 06:43:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:42.448 06:43:56 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:13:42.448 06:43:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:42.448 06:43:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:42.448 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:13:42.448 06:43:56 -- nvmf/common.sh@469 -- # nvmfpid=68673 00:13:42.448 06:43:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:42.448 06:43:56 -- nvmf/common.sh@470 -- # waitforlisten 68673 00:13:42.448 06:43:56 -- common/autotest_common.sh@829 -- # '[' -z 68673 ']' 00:13:42.448 06:43:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.448 06:43:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:42.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.448 06:43:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.448 06:43:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:42.448 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:13:42.448 [2024-12-14 06:43:56.331605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:42.448 [2024-12-14 06:43:56.331719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.707 [2024-12-14 06:43:56.473686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:42.707 [2024-12-14 06:43:56.541800] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:42.707 [2024-12-14 06:43:56.541990] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.707 [2024-12-14 06:43:56.542008] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
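The nvmf_veth_init sequence traced above is what builds the virtual topology the rest of this run talks over: the initiator side stays in the root namespace on 10.0.0.1, the SPDK target runs inside the nvmf_tgt_ns_spdk namespace with listeners reachable on 10.0.0.2 (and 10.0.0.3 on a second interface), and a bridge joins the veth peers. Collected in one place, the equivalent by-hand setup is roughly the sketch below; it uses the same interface names and addresses that appear in this log and omits the script's error handling.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side (root namespace)
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target, first listener address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # target, second interface
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic back in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # sanity check: target address reachable from the root namespace

The 10.0.0.2:4420 listener this topology provides is the endpoint that every 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' transport ID in the spdk_nvme_perf runs further down refers to.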
00:13:42.707 [2024-12-14 06:43:56.542018] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:42.707 [2024-12-14 06:43:56.542376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.707 [2024-12-14 06:43:56.542761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.707 [2024-12-14 06:43:56.542903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.707 [2024-12-14 06:43:56.542903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.642 06:43:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.642 06:43:57 -- common/autotest_common.sh@862 -- # return 0 00:13:43.642 06:43:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:43.642 06:43:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:43.642 06:43:57 -- common/autotest_common.sh@10 -- # set +x 00:13:43.642 06:43:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.642 06:43:57 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:43.642 06:43:57 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:13:43.901 06:43:57 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:13:43.901 06:43:57 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:13:44.160 06:43:58 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:13:44.160 06:43:58 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:44.418 06:43:58 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:13:44.419 06:43:58 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:13:44.419 06:43:58 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:13:44.419 06:43:58 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:13:44.419 06:43:58 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:44.677 [2024-12-14 06:43:58.534076] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.677 06:43:58 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:44.936 06:43:58 -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:44.936 06:43:58 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:45.194 06:43:59 -- host/perf.sh@45 -- # for bdev in $bdevs 00:13:45.194 06:43:59 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:13:45.453 06:43:59 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.712 [2024-12-14 06:43:59.487425] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.712 06:43:59 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:45.971 06:43:59 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:13:45.971 06:43:59 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:45.971 06:43:59 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:13:45.971 06:43:59 -- host/perf.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:13:46.905 Initializing NVMe Controllers 00:13:46.905 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:13:46.905 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:13:46.905 Initialization complete. Launching workers. 00:13:46.905 ======================================================== 00:13:46.905 Latency(us) 00:13:46.905 Device Information : IOPS MiB/s Average min max 00:13:46.905 PCIE (0000:00:06.0) NSID 1 from core 0: 22815.98 89.12 1402.25 323.58 8925.87 00:13:46.905 ======================================================== 00:13:46.905 Total : 22815.98 89.12 1402.25 323.58 8925.87 00:13:46.905 00:13:46.905 06:44:00 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:48.278 Initializing NVMe Controllers 00:13:48.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:48.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:48.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:48.278 Initialization complete. Launching workers. 00:13:48.278 ======================================================== 00:13:48.278 Latency(us) 00:13:48.278 Device Information : IOPS MiB/s Average min max 00:13:48.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3596.93 14.05 277.73 103.90 4268.61 00:13:48.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8112.34 5956.09 12007.80 00:13:48.278 ======================================================== 00:13:48.278 Total : 3720.93 14.53 538.81 103.90 12007.80 00:13:48.278 00:13:48.278 06:44:02 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:49.655 Initializing NVMe Controllers 00:13:49.655 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:49.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:49.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:49.655 Initialization complete. Launching workers. 00:13:49.655 ======================================================== 00:13:49.655 Latency(us) 00:13:49.655 Device Information : IOPS MiB/s Average min max 00:13:49.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8887.57 34.72 3601.19 518.71 10768.18 00:13:49.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3949.04 15.43 8160.01 5198.58 16227.42 00:13:49.655 ======================================================== 00:13:49.655 Total : 12836.61 50.14 5003.66 518.71 16227.42 00:13:49.655 00:13:49.655 06:44:03 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:13:49.655 06:44:03 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:52.188 Initializing NVMe Controllers 00:13:52.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:52.188 Controller IO queue size 128, less than required. 
00:13:52.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:52.188 Controller IO queue size 128, less than required. 00:13:52.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:52.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:52.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:52.188 Initialization complete. Launching workers. 00:13:52.188 ======================================================== 00:13:52.188 Latency(us) 00:13:52.188 Device Information : IOPS MiB/s Average min max 00:13:52.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1938.94 484.73 66645.05 33152.16 119960.75 00:13:52.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 602.48 150.62 216151.95 112345.33 337348.12 00:13:52.188 ======================================================== 00:13:52.188 Total : 2541.42 635.35 102087.86 33152.16 337348.12 00:13:52.188 00:13:52.188 06:44:06 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:13:52.446 No valid NVMe controllers or AIO or URING devices found 00:13:52.446 Initializing NVMe Controllers 00:13:52.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:52.446 Controller IO queue size 128, less than required. 00:13:52.446 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:52.446 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:13:52.446 Controller IO queue size 128, less than required. 00:13:52.446 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:52.446 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:13:52.446 WARNING: Some requested NVMe devices were skipped 00:13:52.446 06:44:06 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:13:55.000 Initializing NVMe Controllers 00:13:55.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.000 Controller IO queue size 128, less than required. 00:13:55.000 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.000 Controller IO queue size 128, less than required. 00:13:55.000 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:55.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:55.000 Initialization complete. Launching workers. 
00:13:55.000 00:13:55.000 ==================== 00:13:55.000 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:13:55.000 TCP transport: 00:13:55.000 polls: 8276 00:13:55.000 idle_polls: 0 00:13:55.000 sock_completions: 8276 00:13:55.000 nvme_completions: 6765 00:13:55.000 submitted_requests: 10363 00:13:55.000 queued_requests: 1 00:13:55.000 00:13:55.000 ==================== 00:13:55.000 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:13:55.000 TCP transport: 00:13:55.000 polls: 8362 00:13:55.000 idle_polls: 0 00:13:55.000 sock_completions: 8362 00:13:55.000 nvme_completions: 6416 00:13:55.000 submitted_requests: 9776 00:13:55.000 queued_requests: 1 00:13:55.000 ======================================================== 00:13:55.000 Latency(us) 00:13:55.000 Device Information : IOPS MiB/s Average min max 00:13:55.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1754.91 438.73 74497.89 39322.06 165473.44 00:13:55.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1667.42 416.85 77497.95 34774.67 149079.48 00:13:55.000 ======================================================== 00:13:55.000 Total : 3422.33 855.58 75959.57 34774.67 165473.44 00:13:55.000 00:13:55.000 06:44:08 -- host/perf.sh@66 -- # sync 00:13:55.000 06:44:08 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.259 06:44:09 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:13:55.259 06:44:09 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:13:55.259 06:44:09 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:13:55.518 06:44:09 -- host/perf.sh@72 -- # ls_guid=c8a44654-f20d-4192-9c6f-a675fdca69f5 00:13:55.518 06:44:09 -- host/perf.sh@73 -- # get_lvs_free_mb c8a44654-f20d-4192-9c6f-a675fdca69f5 00:13:55.518 06:44:09 -- common/autotest_common.sh@1353 -- # local lvs_uuid=c8a44654-f20d-4192-9c6f-a675fdca69f5 00:13:55.518 06:44:09 -- common/autotest_common.sh@1354 -- # local lvs_info 00:13:55.518 06:44:09 -- common/autotest_common.sh@1355 -- # local fc 00:13:55.518 06:44:09 -- common/autotest_common.sh@1356 -- # local cs 00:13:55.518 06:44:09 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:55.777 06:44:09 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:13:55.777 { 00:13:55.777 "uuid": "c8a44654-f20d-4192-9c6f-a675fdca69f5", 00:13:55.777 "name": "lvs_0", 00:13:55.777 "base_bdev": "Nvme0n1", 00:13:55.777 "total_data_clusters": 1278, 00:13:55.777 "free_clusters": 1278, 00:13:55.777 "block_size": 4096, 00:13:55.777 "cluster_size": 4194304 00:13:55.777 } 00:13:55.777 ]' 00:13:55.777 06:44:09 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="c8a44654-f20d-4192-9c6f-a675fdca69f5") .free_clusters' 00:13:55.777 06:44:09 -- common/autotest_common.sh@1358 -- # fc=1278 00:13:55.777 06:44:09 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="c8a44654-f20d-4192-9c6f-a675fdca69f5") .cluster_size' 00:13:55.777 06:44:09 -- common/autotest_common.sh@1359 -- # cs=4194304 00:13:55.777 06:44:09 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:13:55.777 06:44:09 -- common/autotest_common.sh@1363 -- # echo 5112 00:13:55.777 5112 00:13:55.777 06:44:09 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:13:55.777 06:44:09 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
c8a44654-f20d-4192-9c6f-a675fdca69f5 lbd_0 5112 00:13:56.035 06:44:09 -- host/perf.sh@80 -- # lb_guid=55609687-178e-4151-90b5-cbf61856e19d 00:13:56.035 06:44:09 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 55609687-178e-4151-90b5-cbf61856e19d lvs_n_0 00:13:56.602 06:44:10 -- host/perf.sh@83 -- # ls_nested_guid=a55833a0-ea34-4918-b708-f7bbfa308ebb 00:13:56.602 06:44:10 -- host/perf.sh@84 -- # get_lvs_free_mb a55833a0-ea34-4918-b708-f7bbfa308ebb 00:13:56.602 06:44:10 -- common/autotest_common.sh@1353 -- # local lvs_uuid=a55833a0-ea34-4918-b708-f7bbfa308ebb 00:13:56.602 06:44:10 -- common/autotest_common.sh@1354 -- # local lvs_info 00:13:56.602 06:44:10 -- common/autotest_common.sh@1355 -- # local fc 00:13:56.602 06:44:10 -- common/autotest_common.sh@1356 -- # local cs 00:13:56.602 06:44:10 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:13:56.602 06:44:10 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:13:56.602 { 00:13:56.602 "uuid": "c8a44654-f20d-4192-9c6f-a675fdca69f5", 00:13:56.602 "name": "lvs_0", 00:13:56.602 "base_bdev": "Nvme0n1", 00:13:56.602 "total_data_clusters": 1278, 00:13:56.602 "free_clusters": 0, 00:13:56.602 "block_size": 4096, 00:13:56.602 "cluster_size": 4194304 00:13:56.602 }, 00:13:56.602 { 00:13:56.602 "uuid": "a55833a0-ea34-4918-b708-f7bbfa308ebb", 00:13:56.602 "name": "lvs_n_0", 00:13:56.602 "base_bdev": "55609687-178e-4151-90b5-cbf61856e19d", 00:13:56.602 "total_data_clusters": 1276, 00:13:56.602 "free_clusters": 1276, 00:13:56.602 "block_size": 4096, 00:13:56.602 "cluster_size": 4194304 00:13:56.602 } 00:13:56.602 ]' 00:13:56.602 06:44:10 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="a55833a0-ea34-4918-b708-f7bbfa308ebb") .free_clusters' 00:13:56.602 06:44:10 -- common/autotest_common.sh@1358 -- # fc=1276 00:13:56.602 06:44:10 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="a55833a0-ea34-4918-b708-f7bbfa308ebb") .cluster_size' 00:13:56.861 5104 00:13:56.861 06:44:10 -- common/autotest_common.sh@1359 -- # cs=4194304 00:13:56.861 06:44:10 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:13:56.861 06:44:10 -- common/autotest_common.sh@1363 -- # echo 5104 00:13:56.861 06:44:10 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:13:56.861 06:44:10 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a55833a0-ea34-4918-b708-f7bbfa308ebb lbd_nest_0 5104 00:13:57.119 06:44:10 -- host/perf.sh@88 -- # lb_nested_guid=eefdf48e-8047-4c18-8533-cba75286d427 00:13:57.119 06:44:10 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:57.377 06:44:11 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:13:57.377 06:44:11 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 eefdf48e-8047-4c18-8533-cba75286d427 00:13:57.636 06:44:11 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.894 06:44:11 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:13:57.894 06:44:11 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:13:57.894 06:44:11 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:13:57.894 06:44:11 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:57.894 06:44:11 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:13:58.153 No valid NVMe controllers or AIO or URING devices found 00:13:58.153 Initializing NVMe Controllers 00:13:58.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.153 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:13:58.153 WARNING: Some requested NVMe devices were skipped 00:13:58.153 06:44:11 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:13:58.153 06:44:12 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:10.377 Initializing NVMe Controllers 00:14:10.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:10.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:10.377 Initialization complete. Launching workers. 00:14:10.377 ======================================================== 00:14:10.377 Latency(us) 00:14:10.377 Device Information : IOPS MiB/s Average min max 00:14:10.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 970.71 121.34 1028.97 325.04 8389.52 00:14:10.377 ======================================================== 00:14:10.377 Total : 970.71 121.34 1028.97 325.04 8389.52 00:14:10.377 00:14:10.377 06:44:22 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:10.377 06:44:22 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:10.377 06:44:22 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:10.377 No valid NVMe controllers or AIO or URING devices found 00:14:10.377 Initializing NVMe Controllers 00:14:10.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:10.377 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:10.377 WARNING: Some requested NVMe devices were skipped 00:14:10.377 06:44:22 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:10.377 06:44:22 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:20.358 Initializing NVMe Controllers 00:14:20.358 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:20.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:20.358 Initialization complete. Launching workers. 
00:14:20.358 ======================================================== 00:14:20.358 Latency(us) 00:14:20.358 Device Information : IOPS MiB/s Average min max 00:14:20.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1306.17 163.27 24511.13 6101.95 59490.31 00:14:20.358 ======================================================== 00:14:20.358 Total : 1306.17 163.27 24511.13 6101.95 59490.31 00:14:20.358 00:14:20.358 06:44:32 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:20.358 06:44:32 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:20.358 06:44:32 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:20.358 No valid NVMe controllers or AIO or URING devices found 00:14:20.358 Initializing NVMe Controllers 00:14:20.358 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:20.358 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:20.358 WARNING: Some requested NVMe devices were skipped 00:14:20.358 06:44:33 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:20.358 06:44:33 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:30.332 Initializing NVMe Controllers 00:14:30.332 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:30.332 Controller IO queue size 128, less than required. 00:14:30.332 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:30.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:30.332 Initialization complete. Launching workers. 
00:14:30.332 ======================================================== 00:14:30.332 Latency(us) 00:14:30.332 Device Information : IOPS MiB/s Average min max 00:14:30.332 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4007.78 500.97 31998.88 10314.92 64578.69 00:14:30.332 ======================================================== 00:14:30.332 Total : 4007.78 500.97 31998.88 10314.92 64578.69 00:14:30.332 00:14:30.332 06:44:43 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.332 06:44:43 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete eefdf48e-8047-4c18-8533-cba75286d427 00:14:30.332 06:44:44 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:14:30.591 06:44:44 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 55609687-178e-4151-90b5-cbf61856e19d 00:14:30.850 06:44:44 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:14:31.109 06:44:44 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:31.109 06:44:44 -- host/perf.sh@114 -- # nvmftestfini 00:14:31.109 06:44:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:31.109 06:44:44 -- nvmf/common.sh@116 -- # sync 00:14:31.109 06:44:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:31.109 06:44:44 -- nvmf/common.sh@119 -- # set +e 00:14:31.109 06:44:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:31.109 06:44:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:31.109 rmmod nvme_tcp 00:14:31.109 rmmod nvme_fabrics 00:14:31.109 rmmod nvme_keyring 00:14:31.109 06:44:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:31.109 06:44:44 -- nvmf/common.sh@123 -- # set -e 00:14:31.109 06:44:44 -- nvmf/common.sh@124 -- # return 0 00:14:31.109 06:44:44 -- nvmf/common.sh@477 -- # '[' -n 68673 ']' 00:14:31.109 06:44:44 -- nvmf/common.sh@478 -- # killprocess 68673 00:14:31.109 06:44:44 -- common/autotest_common.sh@936 -- # '[' -z 68673 ']' 00:14:31.109 06:44:44 -- common/autotest_common.sh@940 -- # kill -0 68673 00:14:31.109 06:44:44 -- common/autotest_common.sh@941 -- # uname 00:14:31.109 06:44:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:31.109 06:44:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68673 00:14:31.109 killing process with pid 68673 00:14:31.109 06:44:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:31.109 06:44:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:31.109 06:44:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68673' 00:14:31.109 06:44:44 -- common/autotest_common.sh@955 -- # kill 68673 00:14:31.109 06:44:44 -- common/autotest_common.sh@960 -- # wait 68673 00:14:32.484 06:44:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:32.484 06:44:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:32.484 06:44:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:32.484 06:44:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.484 06:44:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:32.484 06:44:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.484 06:44:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.484 06:44:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.484 06:44:46 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:14:32.484 00:14:32.484 real 0m50.686s 00:14:32.484 user 3m11.090s 00:14:32.484 sys 0m12.618s 00:14:32.484 06:44:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:32.484 06:44:46 -- common/autotest_common.sh@10 -- # set +x 00:14:32.484 ************************************ 00:14:32.484 END TEST nvmf_perf 00:14:32.484 ************************************ 00:14:32.484 06:44:46 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:32.484 06:44:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:32.484 06:44:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.484 06:44:46 -- common/autotest_common.sh@10 -- # set +x 00:14:32.484 ************************************ 00:14:32.484 START TEST nvmf_fio_host 00:14:32.484 ************************************ 00:14:32.484 06:44:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:32.743 * Looking for test storage... 00:14:32.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:32.743 06:44:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:32.743 06:44:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:32.743 06:44:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:32.743 06:44:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:32.743 06:44:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:32.743 06:44:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:32.744 06:44:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:32.744 06:44:46 -- scripts/common.sh@335 -- # IFS=.-: 00:14:32.744 06:44:46 -- scripts/common.sh@335 -- # read -ra ver1 00:14:32.744 06:44:46 -- scripts/common.sh@336 -- # IFS=.-: 00:14:32.744 06:44:46 -- scripts/common.sh@336 -- # read -ra ver2 00:14:32.744 06:44:46 -- scripts/common.sh@337 -- # local 'op=<' 00:14:32.744 06:44:46 -- scripts/common.sh@339 -- # ver1_l=2 00:14:32.744 06:44:46 -- scripts/common.sh@340 -- # ver2_l=1 00:14:32.744 06:44:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:32.744 06:44:46 -- scripts/common.sh@343 -- # case "$op" in 00:14:32.744 06:44:46 -- scripts/common.sh@344 -- # : 1 00:14:32.744 06:44:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:32.744 06:44:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:32.744 06:44:46 -- scripts/common.sh@364 -- # decimal 1 00:14:32.744 06:44:46 -- scripts/common.sh@352 -- # local d=1 00:14:32.744 06:44:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:32.744 06:44:46 -- scripts/common.sh@354 -- # echo 1 00:14:32.744 06:44:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:32.744 06:44:46 -- scripts/common.sh@365 -- # decimal 2 00:14:32.744 06:44:46 -- scripts/common.sh@352 -- # local d=2 00:14:32.744 06:44:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:32.744 06:44:46 -- scripts/common.sh@354 -- # echo 2 00:14:32.744 06:44:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:32.744 06:44:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:32.744 06:44:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:32.744 06:44:46 -- scripts/common.sh@367 -- # return 0 00:14:32.744 06:44:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:32.744 06:44:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:32.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.744 --rc genhtml_branch_coverage=1 00:14:32.744 --rc genhtml_function_coverage=1 00:14:32.744 --rc genhtml_legend=1 00:14:32.744 --rc geninfo_all_blocks=1 00:14:32.744 --rc geninfo_unexecuted_blocks=1 00:14:32.744 00:14:32.744 ' 00:14:32.744 06:44:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:32.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.744 --rc genhtml_branch_coverage=1 00:14:32.744 --rc genhtml_function_coverage=1 00:14:32.744 --rc genhtml_legend=1 00:14:32.744 --rc geninfo_all_blocks=1 00:14:32.744 --rc geninfo_unexecuted_blocks=1 00:14:32.744 00:14:32.744 ' 00:14:32.744 06:44:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:32.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.744 --rc genhtml_branch_coverage=1 00:14:32.744 --rc genhtml_function_coverage=1 00:14:32.744 --rc genhtml_legend=1 00:14:32.744 --rc geninfo_all_blocks=1 00:14:32.744 --rc geninfo_unexecuted_blocks=1 00:14:32.744 00:14:32.744 ' 00:14:32.744 06:44:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:32.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:32.744 --rc genhtml_branch_coverage=1 00:14:32.744 --rc genhtml_function_coverage=1 00:14:32.744 --rc genhtml_legend=1 00:14:32.744 --rc geninfo_all_blocks=1 00:14:32.744 --rc geninfo_unexecuted_blocks=1 00:14:32.744 00:14:32.744 ' 00:14:32.744 06:44:46 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:32.744 06:44:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.744 06:44:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.744 06:44:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.744 06:44:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.744 06:44:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.744 06:44:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.744 06:44:46 -- paths/export.sh@5 -- # export PATH 00:14:32.744 06:44:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.744 06:44:46 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:32.744 06:44:46 -- nvmf/common.sh@7 -- # uname -s 00:14:32.744 06:44:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.744 06:44:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.744 06:44:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.744 06:44:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.744 06:44:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.744 06:44:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.744 06:44:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.744 06:44:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.744 06:44:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.744 06:44:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.744 06:44:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:14:32.744 06:44:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:14:32.744 06:44:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.744 06:44:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.744 06:44:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:32.744 06:44:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:32.744 06:44:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.744 06:44:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.744 06:44:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.744 06:44:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.744 06:44:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.744 06:44:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.744 06:44:46 -- paths/export.sh@5 -- # export PATH 00:14:32.744 06:44:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.744 06:44:46 -- nvmf/common.sh@46 -- # : 0 00:14:32.744 06:44:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:32.744 06:44:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:32.744 06:44:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:32.744 06:44:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.744 06:44:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.744 06:44:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:32.744 06:44:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:32.744 06:44:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:32.744 06:44:46 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:32.744 06:44:46 -- host/fio.sh@14 -- # nvmftestinit 00:14:32.744 06:44:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:32.744 06:44:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.744 06:44:46 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:14:32.744 06:44:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:32.744 06:44:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:32.744 06:44:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.744 06:44:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.744 06:44:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.744 06:44:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:32.744 06:44:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:32.744 06:44:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:32.744 06:44:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:32.744 06:44:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:32.744 06:44:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:32.745 06:44:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.745 06:44:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.745 06:44:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:32.745 06:44:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:32.745 06:44:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:32.745 06:44:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:32.745 06:44:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:32.745 06:44:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.745 06:44:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:32.745 06:44:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:32.745 06:44:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:32.745 06:44:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:32.745 06:44:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:32.745 06:44:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:32.745 Cannot find device "nvmf_tgt_br" 00:14:32.745 06:44:46 -- nvmf/common.sh@154 -- # true 00:14:32.745 06:44:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:33.004 Cannot find device "nvmf_tgt_br2" 00:14:33.004 06:44:46 -- nvmf/common.sh@155 -- # true 00:14:33.004 06:44:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:33.004 06:44:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:33.004 Cannot find device "nvmf_tgt_br" 00:14:33.004 06:44:46 -- nvmf/common.sh@157 -- # true 00:14:33.004 06:44:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:33.004 Cannot find device "nvmf_tgt_br2" 00:14:33.004 06:44:46 -- nvmf/common.sh@158 -- # true 00:14:33.004 06:44:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:33.004 06:44:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:33.005 06:44:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:33.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.005 06:44:46 -- nvmf/common.sh@161 -- # true 00:14:33.005 06:44:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:33.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.005 06:44:46 -- nvmf/common.sh@162 -- # true 00:14:33.005 06:44:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:33.005 06:44:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:33.005 06:44:46 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:33.005 06:44:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:33.005 06:44:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:33.005 06:44:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:33.005 06:44:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:33.005 06:44:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:33.005 06:44:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:33.005 06:44:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:33.005 06:44:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:33.005 06:44:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:33.005 06:44:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:33.005 06:44:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:33.005 06:44:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:33.005 06:44:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:33.005 06:44:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:33.005 06:44:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:33.005 06:44:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:33.005 06:44:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:33.005 06:44:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:33.267 06:44:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:33.267 06:44:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:33.267 06:44:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:33.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:33.267 00:14:33.267 --- 10.0.0.2 ping statistics --- 00:14:33.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.267 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:33.267 06:44:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:33.267 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:33.267 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:14:33.267 00:14:33.267 --- 10.0.0.3 ping statistics --- 00:14:33.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.267 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:33.267 06:44:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:33.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:33.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:14:33.267 00:14:33.267 --- 10.0.0.1 ping statistics --- 00:14:33.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.267 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:14:33.267 06:44:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.267 06:44:47 -- nvmf/common.sh@421 -- # return 0 00:14:33.267 06:44:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:33.267 06:44:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.267 06:44:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:33.267 06:44:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:33.267 06:44:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.267 06:44:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:33.267 06:44:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:33.267 06:44:47 -- host/fio.sh@16 -- # [[ y != y ]] 00:14:33.267 06:44:47 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:33.267 06:44:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.267 06:44:47 -- common/autotest_common.sh@10 -- # set +x 00:14:33.267 06:44:47 -- host/fio.sh@24 -- # nvmfpid=69504 00:14:33.267 06:44:47 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:33.267 06:44:47 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:33.267 06:44:47 -- host/fio.sh@28 -- # waitforlisten 69504 00:14:33.267 06:44:47 -- common/autotest_common.sh@829 -- # '[' -z 69504 ']' 00:14:33.267 06:44:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.267 06:44:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.267 06:44:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.267 06:44:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.267 06:44:47 -- common/autotest_common.sh@10 -- # set +x 00:14:33.267 [2024-12-14 06:44:47.097900] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:33.267 [2024-12-14 06:44:47.098545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.267 [2024-12-14 06:44:47.235057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.526 [2024-12-14 06:44:47.290876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:33.526 [2024-12-14 06:44:47.291218] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.526 [2024-12-14 06:44:47.291380] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.526 [2024-12-14 06:44:47.291491] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:33.526 [2024-12-14 06:44:47.291739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.526 [2024-12-14 06:44:47.291809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.526 [2024-12-14 06:44:47.291921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.526 [2024-12-14 06:44:47.291923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.460 06:44:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.460 06:44:48 -- common/autotest_common.sh@862 -- # return 0 00:14:34.460 06:44:48 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:34.460 [2024-12-14 06:44:48.378719] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.460 06:44:48 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:34.460 06:44:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:34.460 06:44:48 -- common/autotest_common.sh@10 -- # set +x 00:14:34.719 06:44:48 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:34.977 Malloc1 00:14:34.977 06:44:48 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:35.236 06:44:49 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:35.494 06:44:49 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.752 [2024-12-14 06:44:49.487019] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.752 06:44:49 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:36.009 06:44:49 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:36.009 06:44:49 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:36.009 06:44:49 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:36.009 06:44:49 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:36.009 06:44:49 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:36.009 06:44:49 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:36.009 06:44:49 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:36.009 06:44:49 -- common/autotest_common.sh@1330 -- # shift 00:14:36.009 06:44:49 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:36.009 06:44:49 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:36.009 06:44:49 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:36.009 06:44:49 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:36.009 06:44:49 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:36.009 06:44:49 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:36.009 06:44:49 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:36.009 06:44:49 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:36.009 06:44:49 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:36.009 06:44:49 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:36.009 06:44:49 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:36.009 06:44:49 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:36.009 06:44:49 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:36.009 06:44:49 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:36.009 06:44:49 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:36.009 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:36.009 fio-3.35 00:14:36.009 Starting 1 thread 00:14:38.544 00:14:38.544 test: (groupid=0, jobs=1): err= 0: pid=69588: Sat Dec 14 06:44:52 2024 00:14:38.544 read: IOPS=9345, BW=36.5MiB/s (38.3MB/s)(73.3MiB/2007msec) 00:14:38.544 slat (nsec): min=1921, max=314514, avg=2503.20, stdev=3411.64 00:14:38.544 clat (usec): min=2602, max=13852, avg=7125.31, stdev=556.47 00:14:38.544 lat (usec): min=2638, max=13854, avg=7127.81, stdev=556.33 00:14:38.544 clat percentiles (usec): 00:14:38.544 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6718], 00:14:38.544 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:14:38.544 | 70.00th=[ 7373], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 7963], 00:14:38.544 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[11600], 99.95th=[13304], 00:14:38.544 | 99.99th=[13829] 00:14:38.544 bw ( KiB/s): min=36312, max=37896, per=100.00%, avg=37384.00, stdev=725.37, samples=4 00:14:38.544 iops : min= 9078, max= 9474, avg=9346.00, stdev=181.34, samples=4 00:14:38.544 write: IOPS=9351, BW=36.5MiB/s (38.3MB/s)(73.3MiB/2007msec); 0 zone resets 00:14:38.544 slat (nsec): min=1940, max=278515, avg=2584.81, stdev=2719.23 00:14:38.544 clat (usec): min=2465, max=13484, avg=6512.97, stdev=515.47 00:14:38.544 lat (usec): min=2479, max=13487, avg=6515.56, stdev=515.47 00:14:38.544 clat percentiles (usec): 00:14:38.544 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6128], 00:14:38.544 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6587], 00:14:38.544 | 70.00th=[ 6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7242], 00:14:38.544 | 99.00th=[ 7701], 99.50th=[ 8225], 99.90th=[11469], 99.95th=[12518], 00:14:38.544 | 99.99th=[13435] 00:14:38.544 bw ( KiB/s): min=37176, max=37640, per=100.00%, avg=37410.00, stdev=261.06, samples=4 00:14:38.544 iops : min= 9294, max= 9410, avg=9352.50, stdev=65.27, samples=4 00:14:38.544 lat (msec) : 4=0.07%, 10=99.75%, 20=0.17% 00:14:38.544 cpu : usr=70.14%, sys=21.64%, ctx=7, majf=0, minf=5 00:14:38.544 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:38.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:38.544 issued rwts: total=18756,18768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.544 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:38.544 00:14:38.544 Run status group 0 (all jobs): 00:14:38.544 READ: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=73.3MiB (76.8MB), 
run=2007-2007msec 00:14:38.544 WRITE: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=73.3MiB (76.9MB), run=2007-2007msec 00:14:38.544 06:44:52 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:38.544 06:44:52 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:38.544 06:44:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:38.544 06:44:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:38.544 06:44:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:38.544 06:44:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:38.544 06:44:52 -- common/autotest_common.sh@1330 -- # shift 00:14:38.544 06:44:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:38.544 06:44:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:38.544 06:44:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:38.544 06:44:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:38.544 06:44:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:38.544 06:44:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:38.544 06:44:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:38.544 06:44:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:38.544 06:44:52 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:38.544 06:44:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:38.544 06:44:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:38.544 06:44:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:38.544 06:44:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:38.544 06:44:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:38.544 06:44:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:14:38.544 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:38.544 fio-3.35 00:14:38.544 Starting 1 thread 00:14:41.079 00:14:41.079 test: (groupid=0, jobs=1): err= 0: pid=69637: Sat Dec 14 06:44:54 2024 00:14:41.079 read: IOPS=8707, BW=136MiB/s (143MB/s)(273MiB/2009msec) 00:14:41.079 slat (usec): min=2, max=132, avg= 3.76, stdev= 2.82 00:14:41.079 clat (usec): min=1885, max=17616, avg=8098.92, stdev=2626.09 00:14:41.079 lat (usec): min=1888, max=17620, avg=8102.68, stdev=2626.23 00:14:41.079 clat percentiles (usec): 00:14:41.079 | 1.00th=[ 4047], 5.00th=[ 4686], 10.00th=[ 5145], 20.00th=[ 5735], 00:14:41.079 | 30.00th=[ 6325], 40.00th=[ 7046], 50.00th=[ 7701], 60.00th=[ 8455], 00:14:41.079 | 70.00th=[ 9241], 80.00th=[10290], 90.00th=[11469], 95.00th=[12649], 00:14:41.079 | 99.00th=[16057], 99.50th=[16909], 99.90th=[17171], 99.95th=[17433], 00:14:41.079 | 99.99th=[17433] 00:14:41.079 bw ( KiB/s): min=63776, max=81280, per=51.92%, avg=72327.75, stdev=9826.19, samples=4 00:14:41.079 iops : 
min= 3986, max= 5080, avg=4520.25, stdev=613.88, samples=4 00:14:41.079 write: IOPS=5294, BW=82.7MiB/s (86.7MB/s)(148MiB/1787msec); 0 zone resets 00:14:41.079 slat (usec): min=32, max=346, avg=37.68, stdev= 9.45 00:14:41.079 clat (usec): min=5630, max=19565, avg=11389.31, stdev=1889.35 00:14:41.079 lat (usec): min=5665, max=19600, avg=11426.99, stdev=1889.31 00:14:41.079 clat percentiles (usec): 00:14:41.079 | 1.00th=[ 7832], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9765], 00:14:41.079 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11207], 60.00th=[11600], 00:14:41.079 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13829], 95.00th=[14877], 00:14:41.079 | 99.00th=[16450], 99.50th=[17433], 99.90th=[18744], 99.95th=[19006], 00:14:41.079 | 99.99th=[19530] 00:14:41.079 bw ( KiB/s): min=65824, max=83520, per=88.80%, avg=75222.25, stdev=9466.87, samples=4 00:14:41.079 iops : min= 4114, max= 5220, avg=4701.25, stdev=591.52, samples=4 00:14:41.079 lat (msec) : 2=0.01%, 4=0.55%, 10=58.07%, 20=41.37% 00:14:41.079 cpu : usr=81.23%, sys=13.69%, ctx=7, majf=0, minf=10 00:14:41.079 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:41.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:41.079 issued rwts: total=17493,9461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:41.079 00:14:41.079 Run status group 0 (all jobs): 00:14:41.079 READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=273MiB (287MB), run=2009-2009msec 00:14:41.079 WRITE: bw=82.7MiB/s (86.7MB/s), 82.7MiB/s-82.7MiB/s (86.7MB/s-86.7MB/s), io=148MiB (155MB), run=1787-1787msec 00:14:41.079 06:44:54 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.079 06:44:54 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:14:41.079 06:44:54 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:14:41.079 06:44:54 -- host/fio.sh@51 -- # get_nvme_bdfs 00:14:41.079 06:44:54 -- common/autotest_common.sh@1508 -- # bdfs=() 00:14:41.079 06:44:54 -- common/autotest_common.sh@1508 -- # local bdfs 00:14:41.079 06:44:54 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:41.079 06:44:54 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:41.079 06:44:54 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:14:41.079 06:44:55 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:14:41.080 06:44:55 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:14:41.080 06:44:55 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:14:41.338 Nvme0n1 00:14:41.338 06:44:55 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:14:41.596 06:44:55 -- host/fio.sh@53 -- # ls_guid=5cbf4274-8664-4e12-81da-4876040d099c 00:14:41.596 06:44:55 -- host/fio.sh@54 -- # get_lvs_free_mb 5cbf4274-8664-4e12-81da-4876040d099c 00:14:41.596 06:44:55 -- common/autotest_common.sh@1353 -- # local lvs_uuid=5cbf4274-8664-4e12-81da-4876040d099c 00:14:41.596 06:44:55 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:41.596 06:44:55 -- common/autotest_common.sh@1355 -- # local fc 00:14:41.596 06:44:55 -- 
common/autotest_common.sh@1356 -- # local cs 00:14:41.596 06:44:55 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:41.854 06:44:55 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:41.854 { 00:14:41.854 "uuid": "5cbf4274-8664-4e12-81da-4876040d099c", 00:14:41.854 "name": "lvs_0", 00:14:41.854 "base_bdev": "Nvme0n1", 00:14:41.854 "total_data_clusters": 4, 00:14:41.854 "free_clusters": 4, 00:14:41.854 "block_size": 4096, 00:14:41.854 "cluster_size": 1073741824 00:14:41.854 } 00:14:41.854 ]' 00:14:41.854 06:44:55 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="5cbf4274-8664-4e12-81da-4876040d099c") .free_clusters' 00:14:42.112 06:44:55 -- common/autotest_common.sh@1358 -- # fc=4 00:14:42.112 06:44:55 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="5cbf4274-8664-4e12-81da-4876040d099c") .cluster_size' 00:14:42.112 4096 00:14:42.112 06:44:55 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:14:42.112 06:44:55 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:14:42.112 06:44:55 -- common/autotest_common.sh@1363 -- # echo 4096 00:14:42.112 06:44:55 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:14:42.371 7f3cf367-543d-4940-8117-8418f11cdd49 00:14:42.371 06:44:56 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:14:42.630 06:44:56 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:14:42.889 06:44:56 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:43.154 06:44:56 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:43.154 06:44:56 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:43.154 06:44:56 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:43.154 06:44:56 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:43.154 06:44:56 -- common/autotest_common.sh@1328 -- # local sanitizers 00:14:43.154 06:44:56 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:43.154 06:44:56 -- common/autotest_common.sh@1330 -- # shift 00:14:43.154 06:44:56 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:43.154 06:44:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:43.154 06:44:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:43.154 06:44:56 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:43.154 06:44:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:43.154 06:44:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:43.154 06:44:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:43.154 06:44:57 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:43.154 06:44:57 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:43.154 06:44:57 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:43.154 06:44:57 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:43.154 06:44:57 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:43.154 06:44:57 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:43.154 06:44:57 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:43.154 06:44:57 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:43.154 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:43.154 fio-3.35 00:14:43.154 Starting 1 thread 00:14:45.717 00:14:45.718 test: (groupid=0, jobs=1): err= 0: pid=69741: Sat Dec 14 06:44:59 2024 00:14:45.718 read: IOPS=6425, BW=25.1MiB/s (26.3MB/s)(50.4MiB/2008msec) 00:14:45.718 slat (usec): min=2, max=325, avg= 2.84, stdev= 3.88 00:14:45.718 clat (usec): min=3095, max=18040, avg=10399.01, stdev=874.07 00:14:45.718 lat (usec): min=3105, max=18042, avg=10401.85, stdev=873.74 00:14:45.718 clat percentiles (usec): 00:14:45.718 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:14:45.718 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:14:45.718 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:14:45.718 | 99.00th=[12387], 99.50th=[12780], 99.90th=[16057], 99.95th=[16712], 00:14:45.718 | 99.99th=[17957] 00:14:45.718 bw ( KiB/s): min=24736, max=26384, per=99.91%, avg=25680.00, stdev=734.93, samples=4 00:14:45.718 iops : min= 6184, max= 6596, avg=6420.00, stdev=183.73, samples=4 00:14:45.718 write: IOPS=6430, BW=25.1MiB/s (26.3MB/s)(50.4MiB/2008msec); 0 zone resets 00:14:45.718 slat (usec): min=2, max=275, avg= 3.00, stdev= 2.98 00:14:45.718 clat (usec): min=2506, max=16925, avg=9447.25, stdev=815.94 00:14:45.718 lat (usec): min=2520, max=16927, avg=9450.25, stdev=815.81 00:14:45.718 clat percentiles (usec): 00:14:45.718 | 1.00th=[ 7701], 5.00th=[ 8291], 10.00th=[ 8455], 20.00th=[ 8848], 00:14:45.718 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9634], 00:14:45.718 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[10683], 00:14:45.718 | 99.00th=[11207], 99.50th=[11600], 99.90th=[15401], 99.95th=[15926], 00:14:45.718 | 99.99th=[16909] 00:14:45.718 bw ( KiB/s): min=25536, max=25928, per=99.90%, avg=25698.00, stdev=164.78, samples=4 00:14:45.718 iops : min= 6384, max= 6482, avg=6424.50, stdev=41.19, samples=4 00:14:45.718 lat (msec) : 4=0.05%, 10=53.90%, 20=46.05% 00:14:45.718 cpu : usr=71.80%, sys=21.43%, ctx=4, majf=0, minf=14 00:14:45.718 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:45.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:45.718 issued rwts: total=12903,12913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.718 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:45.718 00:14:45.718 Run status group 0 (all jobs): 00:14:45.718 READ: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.4MiB (52.8MB), run=2008-2008msec 00:14:45.718 WRITE: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.4MiB (52.9MB), run=2008-2008msec 00:14:45.718 06:44:59 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:45.718 06:44:59 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:14:45.976 06:44:59 -- host/fio.sh@64 -- # ls_nested_guid=00b06260-03ec-45b9-9fab-792edbd0d05e 00:14:45.976 06:44:59 -- host/fio.sh@65 -- # get_lvs_free_mb 00b06260-03ec-45b9-9fab-792edbd0d05e 00:14:45.976 06:44:59 -- common/autotest_common.sh@1353 -- # local lvs_uuid=00b06260-03ec-45b9-9fab-792edbd0d05e 00:14:46.235 06:44:59 -- common/autotest_common.sh@1354 -- # local lvs_info 00:14:46.235 06:44:59 -- common/autotest_common.sh@1355 -- # local fc 00:14:46.235 06:44:59 -- common/autotest_common.sh@1356 -- # local cs 00:14:46.235 06:44:59 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:46.494 06:45:00 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:14:46.494 { 00:14:46.494 "uuid": "5cbf4274-8664-4e12-81da-4876040d099c", 00:14:46.494 "name": "lvs_0", 00:14:46.494 "base_bdev": "Nvme0n1", 00:14:46.494 "total_data_clusters": 4, 00:14:46.494 "free_clusters": 0, 00:14:46.494 "block_size": 4096, 00:14:46.494 "cluster_size": 1073741824 00:14:46.494 }, 00:14:46.494 { 00:14:46.494 "uuid": "00b06260-03ec-45b9-9fab-792edbd0d05e", 00:14:46.494 "name": "lvs_n_0", 00:14:46.494 "base_bdev": "7f3cf367-543d-4940-8117-8418f11cdd49", 00:14:46.494 "total_data_clusters": 1022, 00:14:46.494 "free_clusters": 1022, 00:14:46.494 "block_size": 4096, 00:14:46.494 "cluster_size": 4194304 00:14:46.494 } 00:14:46.494 ]' 00:14:46.494 06:45:00 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="00b06260-03ec-45b9-9fab-792edbd0d05e") .free_clusters' 00:14:46.494 06:45:00 -- common/autotest_common.sh@1358 -- # fc=1022 00:14:46.494 06:45:00 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="00b06260-03ec-45b9-9fab-792edbd0d05e") .cluster_size' 00:14:46.494 4088 00:14:46.494 06:45:00 -- common/autotest_common.sh@1359 -- # cs=4194304 00:14:46.494 06:45:00 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:14:46.494 06:45:00 -- common/autotest_common.sh@1363 -- # echo 4088 00:14:46.494 06:45:00 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:14:46.752 052aac6e-4dd6-4859-a493-3f23622a285e 00:14:46.752 06:45:00 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:14:47.011 06:45:00 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:14:47.269 06:45:01 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:47.527 06:45:01 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:47.527 06:45:01 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:47.527 06:45:01 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:14:47.527 06:45:01 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:47.527 06:45:01 -- common/autotest_common.sh@1328 -- # 
local sanitizers 00:14:47.527 06:45:01 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:47.527 06:45:01 -- common/autotest_common.sh@1330 -- # shift 00:14:47.527 06:45:01 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:14:47.527 06:45:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:47.527 06:45:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:47.527 06:45:01 -- common/autotest_common.sh@1334 -- # grep libasan 00:14:47.527 06:45:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:47.527 06:45:01 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:47.527 06:45:01 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:47.527 06:45:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:14:47.527 06:45:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:47.527 06:45:01 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:14:47.527 06:45:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:14:47.527 06:45:01 -- common/autotest_common.sh@1334 -- # asan_lib= 00:14:47.527 06:45:01 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:14:47.527 06:45:01 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:47.527 06:45:01 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:14:47.785 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:47.785 fio-3.35 00:14:47.785 Starting 1 thread 00:14:50.318 00:14:50.318 test: (groupid=0, jobs=1): err= 0: pid=69825: Sat Dec 14 06:45:03 2024 00:14:50.318 read: IOPS=5742, BW=22.4MiB/s (23.5MB/s)(45.1MiB/2010msec) 00:14:50.318 slat (nsec): min=1913, max=270057, avg=2531.12, stdev=3415.10 00:14:50.318 clat (usec): min=3133, max=20396, avg=11651.12, stdev=978.67 00:14:50.318 lat (usec): min=3142, max=20415, avg=11653.65, stdev=978.36 00:14:50.318 clat percentiles (usec): 00:14:50.318 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:14:50.318 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:14:50.318 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:14:50.318 | 99.00th=[13829], 99.50th=[14222], 99.90th=[18744], 99.95th=[19006], 00:14:50.318 | 99.99th=[20317] 00:14:50.318 bw ( KiB/s): min=22080, max=23456, per=99.95%, avg=22960.00, stdev=607.44, samples=4 00:14:50.318 iops : min= 5520, max= 5864, avg=5740.00, stdev=151.86, samples=4 00:14:50.318 write: IOPS=5731, BW=22.4MiB/s (23.5MB/s)(45.0MiB/2010msec); 0 zone resets 00:14:50.318 slat (nsec): min=1973, max=187359, avg=2605.22, stdev=2318.64 00:14:50.318 clat (usec): min=2022, max=20436, avg=10563.54, stdev=1041.75 00:14:50.318 lat (usec): min=2036, max=20438, avg=10566.14, stdev=1041.56 00:14:50.318 clat percentiles (usec): 00:14:50.318 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:14:50.318 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:14:50.318 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:14:50.318 | 99.00th=[13042], 99.50th=[16057], 99.90th=[18744], 99.95th=[20055], 00:14:50.318 | 99.99th=[20317] 00:14:50.318 bw ( KiB/s): min=22768, max=23048, 
per=99.94%, avg=22914.00, stdev=114.71, samples=4 00:14:50.318 iops : min= 5692, max= 5762, avg=5728.50, stdev=28.68, samples=4 00:14:50.318 lat (msec) : 4=0.06%, 10=14.67%, 20=85.22%, 50=0.05% 00:14:50.318 cpu : usr=75.26%, sys=19.81%, ctx=2, majf=0, minf=14 00:14:50.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:14:50.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:50.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:50.318 issued rwts: total=11543,11521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:50.318 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:50.318 00:14:50.318 Run status group 0 (all jobs): 00:14:50.318 READ: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.1MiB (47.3MB), run=2010-2010msec 00:14:50.318 WRITE: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.0MiB (47.2MB), run=2010-2010msec 00:14:50.318 06:45:03 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:50.318 06:45:04 -- host/fio.sh@74 -- # sync 00:14:50.318 06:45:04 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:14:50.577 06:45:04 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:14:50.835 06:45:04 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:14:51.401 06:45:05 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:14:51.401 06:45:05 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:51.967 06:45:05 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:51.967 06:45:05 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:51.967 06:45:05 -- host/fio.sh@86 -- # nvmftestfini 00:14:51.967 06:45:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:51.967 06:45:05 -- nvmf/common.sh@116 -- # sync 00:14:51.967 06:45:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:51.967 06:45:05 -- nvmf/common.sh@119 -- # set +e 00:14:51.967 06:45:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:51.967 06:45:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:51.967 rmmod nvme_tcp 00:14:51.967 rmmod nvme_fabrics 00:14:51.967 rmmod nvme_keyring 00:14:51.967 06:45:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:51.967 06:45:05 -- nvmf/common.sh@123 -- # set -e 00:14:51.967 06:45:05 -- nvmf/common.sh@124 -- # return 0 00:14:51.967 06:45:05 -- nvmf/common.sh@477 -- # '[' -n 69504 ']' 00:14:51.967 06:45:05 -- nvmf/common.sh@478 -- # killprocess 69504 00:14:51.967 06:45:05 -- common/autotest_common.sh@936 -- # '[' -z 69504 ']' 00:14:51.967 06:45:05 -- common/autotest_common.sh@940 -- # kill -0 69504 00:14:51.967 06:45:05 -- common/autotest_common.sh@941 -- # uname 00:14:51.967 06:45:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:51.967 06:45:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69504 00:14:52.225 killing process with pid 69504 00:14:52.225 06:45:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:52.226 06:45:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:52.226 06:45:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69504' 00:14:52.226 06:45:05 -- common/autotest_common.sh@955 -- # kill 
69504 00:14:52.226 06:45:05 -- common/autotest_common.sh@960 -- # wait 69504 00:14:52.226 06:45:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:52.226 06:45:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:52.226 06:45:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:52.226 06:45:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.226 06:45:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:52.226 06:45:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.226 06:45:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.226 06:45:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.226 06:45:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:52.226 00:14:52.226 real 0m19.723s 00:14:52.226 user 1m27.219s 00:14:52.226 sys 0m4.197s 00:14:52.226 06:45:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:52.226 ************************************ 00:14:52.226 06:45:06 -- common/autotest_common.sh@10 -- # set +x 00:14:52.226 END TEST nvmf_fio_host 00:14:52.226 ************************************ 00:14:52.486 06:45:06 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:52.486 06:45:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:52.486 06:45:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.486 06:45:06 -- common/autotest_common.sh@10 -- # set +x 00:14:52.486 ************************************ 00:14:52.486 START TEST nvmf_failover 00:14:52.486 ************************************ 00:14:52.486 06:45:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:52.486 * Looking for test storage... 00:14:52.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:52.486 06:45:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:52.486 06:45:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:52.486 06:45:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:52.486 06:45:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:52.486 06:45:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:52.486 06:45:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:52.486 06:45:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:52.486 06:45:06 -- scripts/common.sh@335 -- # IFS=.-: 00:14:52.486 06:45:06 -- scripts/common.sh@335 -- # read -ra ver1 00:14:52.486 06:45:06 -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.486 06:45:06 -- scripts/common.sh@336 -- # read -ra ver2 00:14:52.486 06:45:06 -- scripts/common.sh@337 -- # local 'op=<' 00:14:52.486 06:45:06 -- scripts/common.sh@339 -- # ver1_l=2 00:14:52.486 06:45:06 -- scripts/common.sh@340 -- # ver2_l=1 00:14:52.486 06:45:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:52.486 06:45:06 -- scripts/common.sh@343 -- # case "$op" in 00:14:52.486 06:45:06 -- scripts/common.sh@344 -- # : 1 00:14:52.486 06:45:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:52.486 06:45:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.486 06:45:06 -- scripts/common.sh@364 -- # decimal 1 00:14:52.486 06:45:06 -- scripts/common.sh@352 -- # local d=1 00:14:52.486 06:45:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.486 06:45:06 -- scripts/common.sh@354 -- # echo 1 00:14:52.486 06:45:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:52.486 06:45:06 -- scripts/common.sh@365 -- # decimal 2 00:14:52.486 06:45:06 -- scripts/common.sh@352 -- # local d=2 00:14:52.486 06:45:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.486 06:45:06 -- scripts/common.sh@354 -- # echo 2 00:14:52.486 06:45:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:52.486 06:45:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:52.486 06:45:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:52.486 06:45:06 -- scripts/common.sh@367 -- # return 0 00:14:52.486 06:45:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.486 06:45:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.486 --rc genhtml_branch_coverage=1 00:14:52.486 --rc genhtml_function_coverage=1 00:14:52.486 --rc genhtml_legend=1 00:14:52.486 --rc geninfo_all_blocks=1 00:14:52.486 --rc geninfo_unexecuted_blocks=1 00:14:52.486 00:14:52.486 ' 00:14:52.486 06:45:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.486 --rc genhtml_branch_coverage=1 00:14:52.486 --rc genhtml_function_coverage=1 00:14:52.486 --rc genhtml_legend=1 00:14:52.486 --rc geninfo_all_blocks=1 00:14:52.486 --rc geninfo_unexecuted_blocks=1 00:14:52.486 00:14:52.486 ' 00:14:52.486 06:45:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.486 --rc genhtml_branch_coverage=1 00:14:52.486 --rc genhtml_function_coverage=1 00:14:52.486 --rc genhtml_legend=1 00:14:52.486 --rc geninfo_all_blocks=1 00:14:52.486 --rc geninfo_unexecuted_blocks=1 00:14:52.486 00:14:52.486 ' 00:14:52.486 06:45:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:52.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.486 --rc genhtml_branch_coverage=1 00:14:52.486 --rc genhtml_function_coverage=1 00:14:52.486 --rc genhtml_legend=1 00:14:52.486 --rc geninfo_all_blocks=1 00:14:52.486 --rc geninfo_unexecuted_blocks=1 00:14:52.486 00:14:52.486 ' 00:14:52.486 06:45:06 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.486 06:45:06 -- nvmf/common.sh@7 -- # uname -s 00:14:52.486 06:45:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.486 06:45:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.486 06:45:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.486 06:45:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.486 06:45:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.486 06:45:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.486 06:45:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.486 06:45:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.486 06:45:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.486 06:45:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.486 06:45:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:14:52.486 
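The lcov probe traced above comes down to a dotted-version comparison before the coverage options are chosen. A minimal standalone bash sketch of that logic (simplified from the cmp_versions/lt/decimal helpers in scripts/common.sh; the version_lt name is illustrative):

    version_lt() {   # return 0 (true) when $1 sorts before $2, e.g. version_lt 1.15 2
        local -a a b
        local i x y
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1   # equal versions are not less-than
    }

    version_lt 1.15 2 && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'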
06:45:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:14:52.486 06:45:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.486 06:45:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.486 06:45:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.486 06:45:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.486 06:45:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.486 06:45:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.486 06:45:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.486 06:45:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.486 06:45:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.486 06:45:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.486 06:45:06 -- paths/export.sh@5 -- # export PATH 00:14:52.486 06:45:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.486 06:45:06 -- nvmf/common.sh@46 -- # : 0 00:14:52.486 06:45:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:52.486 06:45:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:52.486 06:45:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:52.486 06:45:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.486 06:45:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.486 06:45:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
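Condensed, the nvmftestinit/nvmf_veth_init trace that follows builds a small veth-and-namespace topology: the target lives inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and a bridge joins the veth peers. Roughly, with stale-link cleanup and the link-up steps omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

A single ping to 10.0.0.2 and 10.0.0.3 from the root namespace, and to 10.0.0.1 from inside the namespace, then confirms the paths before the target is started.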
00:14:52.486 06:45:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:52.486 06:45:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:52.486 06:45:06 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:52.486 06:45:06 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:52.486 06:45:06 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.486 06:45:06 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:52.486 06:45:06 -- host/failover.sh@18 -- # nvmftestinit 00:14:52.487 06:45:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:52.487 06:45:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.487 06:45:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:52.487 06:45:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:52.487 06:45:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:52.487 06:45:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.487 06:45:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.487 06:45:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.487 06:45:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:52.487 06:45:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:52.487 06:45:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:52.487 06:45:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:52.487 06:45:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:52.487 06:45:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:52.487 06:45:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.487 06:45:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.487 06:45:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:52.487 06:45:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:52.487 06:45:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.487 06:45:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.487 06:45:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.487 06:45:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.487 06:45:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.487 06:45:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.487 06:45:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.487 06:45:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.487 06:45:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:52.487 06:45:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:52.745 Cannot find device "nvmf_tgt_br" 00:14:52.745 06:45:06 -- nvmf/common.sh@154 -- # true 00:14:52.745 06:45:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.745 Cannot find device "nvmf_tgt_br2" 00:14:52.745 06:45:06 -- nvmf/common.sh@155 -- # true 00:14:52.745 06:45:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:52.745 06:45:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:52.745 Cannot find device "nvmf_tgt_br" 00:14:52.745 06:45:06 -- nvmf/common.sh@157 -- # true 00:14:52.745 06:45:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:52.745 Cannot find device "nvmf_tgt_br2" 00:14:52.745 06:45:06 -- nvmf/common.sh@158 -- # true 00:14:52.745 06:45:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:52.745 06:45:06 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:14:52.745 06:45:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.745 06:45:06 -- nvmf/common.sh@161 -- # true 00:14:52.745 06:45:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.745 06:45:06 -- nvmf/common.sh@162 -- # true 00:14:52.745 06:45:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.745 06:45:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.745 06:45:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.745 06:45:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.745 06:45:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.745 06:45:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.745 06:45:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.745 06:45:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:52.745 06:45:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:52.745 06:45:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:52.745 06:45:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:52.745 06:45:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:52.745 06:45:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:52.745 06:45:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:52.746 06:45:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:52.746 06:45:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:52.746 06:45:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:52.746 06:45:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:52.746 06:45:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:52.746 06:45:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.004 06:45:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.004 06:45:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.004 06:45:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.005 06:45:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:53.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:14:53.005 00:14:53.005 --- 10.0.0.2 ping statistics --- 00:14:53.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.005 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:53.005 06:45:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:53.005 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:53.005 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:14:53.005 00:14:53.005 --- 10.0.0.3 ping statistics --- 00:14:53.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.005 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:53.005 06:45:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:53.005 00:14:53.005 --- 10.0.0.1 ping statistics --- 00:14:53.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.005 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:53.005 06:45:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.005 06:45:06 -- nvmf/common.sh@421 -- # return 0 00:14:53.005 06:45:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:53.005 06:45:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.005 06:45:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:53.005 06:45:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:53.005 06:45:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.005 06:45:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:53.005 06:45:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:53.005 06:45:06 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:53.005 06:45:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:53.005 06:45:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.005 06:45:06 -- common/autotest_common.sh@10 -- # set +x 00:14:53.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.005 06:45:06 -- nvmf/common.sh@469 -- # nvmfpid=70074 00:14:53.005 06:45:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:53.005 06:45:06 -- nvmf/common.sh@470 -- # waitforlisten 70074 00:14:53.005 06:45:06 -- common/autotest_common.sh@829 -- # '[' -z 70074 ']' 00:14:53.005 06:45:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.005 06:45:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.005 06:45:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.005 06:45:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.005 06:45:06 -- common/autotest_common.sh@10 -- # set +x 00:14:53.005 [2024-12-14 06:45:06.849231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:53.005 [2024-12-14 06:45:06.849327] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.005 [2024-12-14 06:45:06.991207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:53.263 [2024-12-14 06:45:07.060247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:53.263 [2024-12-14 06:45:07.060668] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.263 [2024-12-14 06:45:07.060854] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
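nvmfappstart above backgrounds nvmf_tgt inside the namespace and then sits in waitforlisten until the RPC socket answers. A minimal sketch of that readiness loop, assuming the default /var/tmp/spdk.sock socket (the wait_for_rpc name is illustrative; the real waitforlisten in autotest_common.sh does more bookkeeping):

    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
                &>/dev/null && return 0              # RPC server is answering
            sleep 0.1
        done
        return 1
    }

    wait_for_rpc "$nvmfpid" || exit 1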
00:14:53.263 [2024-12-14 06:45:07.061050] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.263 [2024-12-14 06:45:07.061420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.263 [2024-12-14 06:45:07.061516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.263 [2024-12-14 06:45:07.061522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.239 06:45:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.239 06:45:07 -- common/autotest_common.sh@862 -- # return 0 00:14:54.239 06:45:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:54.239 06:45:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:54.239 06:45:07 -- common/autotest_common.sh@10 -- # set +x 00:14:54.239 06:45:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.239 06:45:07 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:54.239 [2024-12-14 06:45:08.097984] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.239 06:45:08 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:54.498 Malloc0 00:14:54.498 06:45:08 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:54.756 06:45:08 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:55.014 06:45:08 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.273 [2024-12-14 06:45:09.150227] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.273 06:45:09 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:55.531 [2024-12-14 06:45:09.426486] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:55.531 06:45:09 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:14:55.790 [2024-12-14 06:45:09.662807] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:14:55.790 06:45:09 -- host/failover.sh@31 -- # bdevperf_pid=70127 00:14:55.790 06:45:09 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:55.790 06:45:09 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:55.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
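Pulled out of the trace, the target-side setup for the failover run is a short RPC sequence: one TCP transport, one 64 MiB malloc bdev with 512-byte blocks, one subsystem exposing it, and three listeners on the same address so individual paths can be torn down later, plus bdevperf started on the host side in wait-for-RPC mode (condensed from the host/failover.sh@22-31 lines traced above; the literal script issues the three add_listener calls separately rather than in a loop):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    bdevperf_pid=$!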
00:14:55.790 06:45:09 -- host/failover.sh@34 -- # waitforlisten 70127 /var/tmp/bdevperf.sock 00:14:55.790 06:45:09 -- common/autotest_common.sh@829 -- # '[' -z 70127 ']' 00:14:55.790 06:45:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.790 06:45:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.791 06:45:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.791 06:45:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.791 06:45:09 -- common/autotest_common.sh@10 -- # set +x 00:14:56.726 06:45:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.726 06:45:10 -- common/autotest_common.sh@862 -- # return 0 00:14:56.726 06:45:10 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:56.985 NVMe0n1 00:14:56.985 06:45:10 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:57.551 00:14:57.551 06:45:11 -- host/failover.sh@39 -- # run_test_pid=70157 00:14:57.551 06:45:11 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:57.551 06:45:11 -- host/failover.sh@41 -- # sleep 1 00:14:58.487 06:45:12 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.746 [2024-12-14 06:45:12.550886] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.550959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.550971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.550990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.550998] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551007] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551016] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551024] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551032] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 
[2024-12-14 06:45:12.551057] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551073] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551081] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551097] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551105] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551114] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551122] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551130] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551138] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551146] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551186] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 [2024-12-14 06:45:12.551194] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50d00 is same with the state(5) to be set 00:14:58.746 06:45:12 -- host/failover.sh@45 -- # sleep 3 00:15:02.038 06:45:15 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:02.038 00:15:02.038 06:45:15 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:02.297 [2024-12-14 06:45:16.180541] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with 
the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180604] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180618] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180626] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180633] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180641] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180647] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180662] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180676] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180691] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180705] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180728] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180735] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180742] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180749] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180756] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180764] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180778] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.297 [2024-12-14 06:45:16.180786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 [2024-12-14 06:45:16.180800] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 [2024-12-14 06:45:16.180808] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 [2024-12-14 06:45:16.180815] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 [2024-12-14 06:45:16.180823] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 [2024-12-14 06:45:16.180830] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 [2024-12-14 06:45:16.180837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 [2024-12-14 06:45:16.180844] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 [2024-12-14 06:45:16.180852] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 [2024-12-14 06:45:16.180859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 [2024-12-14 06:45:16.180866] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 [2024-12-14 06:45:16.180873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 [2024-12-14 06:45:16.180899] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 [2024-12-14 06:45:16.180925] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d513c0 is same with the state(5) to be set 00:15:02.298 06:45:16 -- host/failover.sh@50 -- # sleep 3 00:15:05.608 06:45:19 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.608 [2024-12-14 06:45:19.460416] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.608 06:45:19 -- host/failover.sh@55 -- # sleep 1 00:15:06.544 06:45:20 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:06.803 [2024-12-14 06:45:20.744977] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745029] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745056] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745064] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745072] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745079] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745132] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745140] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745148] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745155] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745163] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 
06:45:20.745187] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745195] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 [2024-12-14 06:45:20.745203] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4f9f0 is same with the state(5) to be set 00:15:06.803 06:45:20 -- host/failover.sh@59 -- # wait 70157 00:15:13.374 0 00:15:13.374 06:45:26 -- host/failover.sh@61 -- # killprocess 70127 00:15:13.374 06:45:26 -- common/autotest_common.sh@936 -- # '[' -z 70127 ']' 00:15:13.374 06:45:26 -- common/autotest_common.sh@940 -- # kill -0 70127 00:15:13.374 06:45:26 -- common/autotest_common.sh@941 -- # uname 00:15:13.374 06:45:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:13.374 06:45:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70127 00:15:13.374 killing process with pid 70127 00:15:13.374 06:45:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:13.374 06:45:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:13.374 06:45:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70127' 00:15:13.374 06:45:26 -- common/autotest_common.sh@955 -- # kill 70127 00:15:13.374 06:45:26 -- common/autotest_common.sh@960 -- # wait 70127 00:15:13.374 06:45:26 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:13.374 [2024-12-14 06:45:09.730500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:13.374 [2024-12-14 06:45:09.730586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70127 ] 00:15:13.374 [2024-12-14 06:45:09.864973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.374 [2024-12-14 06:45:09.918483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.374 Running I/O for 15 seconds... 
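The failover exercise itself, reconstructed from the host/failover.sh line numbers in the trace above: two paths are attached through bdevperf's RPC socket, I/O is started, and listeners are then removed and re-added underneath the running workload so bdev_nvme has to fail over across 4420, 4421 and 4422:

    brpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
    trpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!

    sleep 1
    $trpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # drop the first path
    sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
    $trpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # drop the second path
    sleep 3
    $trpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # restore the first path
    sleep 1
    $trpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422
    wait $run_test_pid

failover.sh then kills bdevperf (pid 70127 in this run) and cats try.txt; the bdevperf-side log reproduced around this point shows the two initial paths attaching, 15 seconds of verify I/O, and the ABORTED - SQ DELETION completions as each listener is torn down.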
00:15:13.374 [2024-12-14 06:45:12.551255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.374 [2024-12-14 06:45:12.551316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.374 [2024-12-14 06:45:12.551346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.374 [2024-12-14 06:45:12.551363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.374 [2024-12-14 06:45:12.551380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.374 [2024-12-14 06:45:12.551394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.374 [2024-12-14 06:45:12.551410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.374 [2024-12-14 06:45:12.551424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.374 [2024-12-14 06:45:12.551439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.374 [2024-12-14 06:45:12.551453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.374 [2024-12-14 06:45:12.551469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.374 [2024-12-14 06:45:12.551483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.374 [2024-12-14 06:45:12.551498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.374 [2024-12-14 06:45:12.551512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.374 [2024-12-14 06:45:12.551527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.374 [2024-12-14 06:45:12.551541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.374 [2024-12-14 06:45:12.551556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.551570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.551585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.551598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 
06:45:12.551614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.551628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.551685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.551701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.551716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.551730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.551746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.551760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.551776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.551790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.551805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.551835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.551850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.551864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.551895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.551923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.551942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.551957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.551973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.551987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.375 [2024-12-14 06:45:12.552239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.375 [2024-12-14 06:45:12.552269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.375 [2024-12-14 06:45:12.552299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552315] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.375 [2024-12-14 06:45:12.552329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.375 [2024-12-14 06:45:12.552358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:128512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.375 [2024-12-14 06:45:12.552398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:128536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.375 [2024-12-14 06:45:12.552496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.375 [2024-12-14 06:45:12.552613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.375 [2024-12-14 06:45:12.552656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.375 [2024-12-14 06:45:12.552684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.375 [2024-12-14 06:45:12.552711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.375 [2024-12-14 06:45:12.552739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.375 [2024-12-14 06:45:12.552908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.375 [2024-12-14 06:45:12.552939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.552957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.552971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.552986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128080 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.376 [2024-12-14 06:45:12.553030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.376 [2024-12-14 06:45:12.553241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:13.376 [2024-12-14 06:45:12.553314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.376 [2024-12-14 06:45:12.553426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.376 [2024-12-14 06:45:12.553513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 
06:45:12.553631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.376 [2024-12-14 06:45:12.553864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.376 [2024-12-14 06:45:12.553909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.553938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.376 [2024-12-14 06:45:12.553979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.553996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.376 [2024-12-14 06:45:12.554010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.554025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.554040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.554056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.554070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.554086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.376 [2024-12-14 06:45:12.554100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.554118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.376 [2024-12-14 06:45:12.554132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.554149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.376 [2024-12-14 06:45:12.554170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.554186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.376 [2024-12-14 06:45:12.554200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.554216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.376 [2024-12-14 06:45:12.554230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.376 [2024-12-14 06:45:12.554246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.554259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.554289] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.554319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.554713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.554742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.554772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.554845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.554979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.554996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.555010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.555040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.555070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.555100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.555129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:129000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.555163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.555193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:129016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.555222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.555266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 
06:45:12.555293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.555307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.377 [2024-12-14 06:45:12.555343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.555373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.555411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.555441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.377 [2024-12-14 06:45:12.555471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214b970 is same with the state(5) to be set 00:15:13.377 [2024-12-14 06:45:12.555506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:13.377 [2024-12-14 06:45:12.555518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:13.377 [2024-12-14 06:45:12.555529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128440 len:8 PRP1 0x0 PRP2 0x0 00:15:13.377 [2024-12-14 06:45:12.555542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.377 [2024-12-14 06:45:12.555589] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x214b970 was disconnected and freed. reset controller. 
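Every completion in the dump above carries the same status, ABORTED - SQ DELETION (00/08): when the test pulls the active listener out from under the connection, the target deletes the submission queue, all in-flight READ/WRITE commands on that qpair complete as aborted, and bdev_nvme then frees the disconnected qpair and schedules the controller reset that follows. Since the whole bdevperf log is dumped to try.txt, a quick way to gauge how much I/O was aborted across the run is a simple count over that file, for example (illustrative one-liner, path taken from the cat command above):

grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt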
00:15:13.377 [2024-12-14 06:45:12.555608] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:13.378 [2024-12-14 06:45:12.555693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.378 [2024-12-14 06:45:12.555714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:12.555729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.378 [2024-12-14 06:45:12.555742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:12.555755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.378 [2024-12-14 06:45:12.555769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:12.555783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.378 [2024-12-14 06:45:12.555795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:12.555809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:13.378 [2024-12-14 06:45:12.555861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8690 (9): Bad file descriptor 00:15:13.378 [2024-12-14 06:45:12.558511] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:13.378 [2024-12-14 06:45:12.592134] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
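That is one complete failover cycle: the qpair on 10.0.0.2:4420 is torn down (the stale tqpair reports Bad file descriptor), bdev_nvme_failover_trid switches to the alternate path 10.0.0.2:4421, and the subsequent controller reset succeeds so bdevperf can keep issuing I/O. On the target side the cycle is driven by listener add/remove RPCs such as the nvmf_subsystem_remove_listener call captured at the top of this excerpt. A hypothetical sketch of such a driver loop using SPDK's rpc.py, reusing the subsystem NQN and addresses from the trace (this is not the literal flow of host/failover.sh):

NQN=nqn.2016-06.io.spdk:cnode1
RPC=./scripts/rpc.py

# expose the secondary portal so the initiator has a path to fail over to
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
# drop the active portal; in-flight I/O completes as ABORTED - SQ DELETION
# and bdev_nvme on the host resets the controller against 4421
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 3
# restore the original portal for the next iteration of the test
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420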
00:15:13.378 [2024-12-14 06:45:16.180982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:16 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.378 [2024-12-14 06:45:16.181866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.181969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.181983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.378 [2024-12-14 06:45:16.182011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.182026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:912 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.182038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.378 [2024-12-14 06:45:16.182053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.378 [2024-12-14 06:45:16.182065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.379 [2024-12-14 06:45:16.182267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182326] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.379 [2024-12-14 06:45:16.182378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.379 [2024-12-14 06:45:16.182405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.379 [2024-12-14 06:45:16.182457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.379 [2024-12-14 06:45:16.182509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.379 [2024-12-14 06:45:16.182535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.379 [2024-12-14 06:45:16.182587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.379 [2024-12-14 06:45:16.182614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.379 [2024-12-14 06:45:16.182828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.379 [2024-12-14 06:45:16.182855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.379 [2024-12-14 06:45:16.182881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.379 [2024-12-14 06:45:16.182962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.182977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.379 [2024-12-14 06:45:16.182990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.183005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.379 [2024-12-14 06:45:16.183027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.379 [2024-12-14 06:45:16.183043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.380 [2024-12-14 06:45:16.183111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.380 [2024-12-14 06:45:16.183194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 
06:45:16.183209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.380 [2024-12-14 06:45:16.183317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.380 [2024-12-14 06:45:16.183371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.380 [2024-12-14 06:45:16.183682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.380 [2024-12-14 06:45:16.183708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.380 [2024-12-14 06:45:16.183733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.380 [2024-12-14 06:45:16.183766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:35 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.380 [2024-12-14 06:45:16.183874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.183965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.183978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.380 [2024-12-14 06:45:16.183991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.184004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.380 [2024-12-14 06:45:16.184016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.184029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.380 [2024-12-14 06:45:16.184041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.184057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:13.380 [2024-12-14 06:45:16.184070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.184084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.184102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.184116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.184129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.184143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.184155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.380 [2024-12-14 06:45:16.184168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.380 [2024-12-14 06:45:16.184180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:16.184206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:16.184231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:16.184257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:16.184285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:16.184311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.381 [2024-12-14 06:45:16.184337] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.381 [2024-12-14 06:45:16.184362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:16.184406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:16.184432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.381 [2024-12-14 06:45:16.184466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.381 [2024-12-14 06:45:16.184492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.381 [2024-12-14 06:45:16.184521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:16.184547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.381 [2024-12-14 06:45:16.184574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.381 [2024-12-14 06:45:16.184600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:16.184627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.381 [2024-12-14 06:45:16.184653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:16.184680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:16.184706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:16.184734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:16.184762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2130450 is same with the state(5) to be set 00:15:13.381 [2024-12-14 06:45:16.184796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:13.381 [2024-12-14 06:45:16.184807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:13.381 [2024-12-14 06:45:16.184817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:8 PRP1 0x0 PRP2 0x0 00:15:13.381 [2024-12-14 06:45:16.184829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.184873] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2130450 was disconnected and freed. reset controller. 
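Editor's note: the long burst of paired NOTICE lines above is SPDK printing each queued I/O command together with the completion it was manually finished with while the TCP qpair was torn down. The "(00/08)" in every completion line is the status code type / status code pair: SCT 0x0 (generic command status) with SC 0x08, which the NVMe specification names "Command Aborted due to SQ Deletion"; p, m and dnr are the phase, more and do-not-retry bits. Below is a minimal, self-contained decoder for that status word as a reading aid; it follows the NVMe completion-entry bit layout rather than SPDK's headers, and the helper name is illustrative only.

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Illustrative decoder for the "(SCT/SC) ... p:.. m:.. dnr:.." pattern in the
     * log above. Layout follows the NVMe completion entry: the upper 16 bits of
     * CQE dword 3 carry the phase tag (bit 0) and the 15-bit status field.
     */
    static void decode_nvme_status(uint16_t dw3_upper)
    {
        unsigned p   = dw3_upper & 0x1;          /* phase tag        */
        unsigned sc  = (dw3_upper >> 1) & 0xff;  /* status code      */
        unsigned sct = (dw3_upper >> 9) & 0x7;   /* status code type */
        unsigned m   = (dw3_upper >> 14) & 0x1;  /* more             */
        unsigned dnr = (dw3_upper >> 15) & 0x1;  /* do not retry     */

        printf("sct=%#x sc=%#x p=%u m=%u dnr=%u", sct, sc, p, m, dnr);
        if (sct == 0x0 && sc == 0x08) {
            /* The value seen throughout this section of the log. */
            printf("  (generic: Command Aborted due to SQ Deletion)");
        }
        printf("\n");
    }

    int main(void)
    {
        /* SCT 0x0, SC 0x08, p/m/dnr all 0 -> matches "(00/08) ... p:0 m:0 dnr:0". */
        decode_nvme_status(0x08 << 1);
        return 0;
    }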
00:15:13.381 [2024-12-14 06:45:16.184906] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:15:13.381 [2024-12-14 06:45:16.184969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.381 [2024-12-14 06:45:16.184990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.185004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.381 [2024-12-14 06:45:16.185017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.185030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.381 [2024-12-14 06:45:16.185046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.185059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.381 [2024-12-14 06:45:16.185072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:16.185085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:13.381 [2024-12-14 06:45:16.185116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8690 (9): Bad file descriptor 00:15:13.381 [2024-12-14 06:45:16.187527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:13.381 [2024-12-14 06:45:16.217547] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
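Editor's note: read end to end, the sequence above shows bdev_nvme_disconnected_qpair_cb freeing the broken qpair, bdev_nvme_failover_trid switching the target from 10.0.0.2:4421 to 10.0.0.2:4422, the admin queue's outstanding ASYNC EVENT REQUESTs being aborted the same way as the I/O, the controller briefly sitting in the failed state while the dead socket is reported as a bad file descriptor, and the reset against the new path completing. The sketch below compresses that flow into a few lines of plain C purely as a reading aid; the structures and helper are hypothetical and do not mirror bdev_nvme.c, and only the function names cited in the comments come from the log itself.

    #include <stdio.h>
    #include <stddef.h>

    /*
     * Highly simplified sketch of the failover sequence visible in the log:
     * when the active TCP qpair drops, queued commands are completed manually
     * as ABORTED - SQ DELETION, the next registered transport ID is selected,
     * and the controller is reset against it. Illustrative only.
     */
    struct trid { const char *addr; const char *svcid; };

    static const struct trid paths[] = {
        { "10.0.0.2", "4421" },   /* path that just failed in the log         */
        { "10.0.0.2", "4422" },   /* alternate path the failover switches to  */
    };

    static size_t active_path = 0;
    static int queued_io = 3;     /* stand-in for the outstanding commands    */

    static void on_qpair_disconnected(void)
    {
        /* Complete everything still queued, cf. nvme_qpair_abort_queued_reqs(). */
        while (queued_io-- > 0) {
            printf("ABORTED - SQ DELETION (00/08)\n");
        }

        /* Pick the next transport ID, cf. bdev_nvme_failover_trid(). */
        size_t next = (active_path + 1) % (sizeof(paths) / sizeof(paths[0]));
        printf("Start failover from %s:%s to %s:%s\n",
               paths[active_path].addr, paths[active_path].svcid,
               paths[next].addr, paths[next].svcid);
        active_path = next;

        /* Reconnect and reset against the new path. */
        printf("resetting controller\n");
        printf("Resetting controller successful.\n");
    }

    int main(void)
    {
        on_qpair_disconnected();
        return 0;
    }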
00:15:13.381 [2024-12-14 06:45:20.745373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:20.745420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:20.745445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:20.745460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:20.745475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:20.745488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:20.745502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:20.745514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:20.745528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:20.745540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:20.745573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:20.745586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:20.745600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:20.745612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:20.745626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:20.745638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:20.745652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.381 [2024-12-14 06:45:20.745664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.381 [2024-12-14 06:45:20.745678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.745689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 
06:45:20.745703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.745715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.745729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.745741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.745754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.745767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.745780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.745792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.745806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.745818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.745832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.745844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.745858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.745872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.745886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.745940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.745959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.745972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.745986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.745999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.746043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.746070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.746098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.746126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746368] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.746411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:13.382 [2024-12-14 06:45:20.746667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.382 [2024-12-14 06:45:20.746681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:13.382 [2024-12-14 06:45:20.746693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion notice pair (nvme_io_qpair_print_command READ/WRITE, then spdk_nvme_print_completion ABORTED - SQ DELETION) repeats for every remaining queued I/O on qid:1, covering LBAs 111680 through 112792, while the submission queue is deleted during failover ...]
00:15:13.385 [2024-12-14 06:45:20.749219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:13.385 [2024-12-14 06:45:20.749234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:13.385 [2024-12-14 06:45:20.749244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112200 len:8 PRP1 0x0 PRP2 0x0 00:15:13.385 [2024-12-14 06:45:20.749257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.385 [2024-12-14 06:45:20.749316] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x215d8e0 was disconnected and freed. reset controller.
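Note: each aborted request shows up above as a command/completion pair (nvme_io_qpair_print_command, then ABORTED - SQ DELETION). These notices are the expected side effect of bdev_nvme deleting the submission queue while failing over to another path; the verify job still completes, so the aborts are absorbed by the reset/retry path rather than surfacing as I/O errors. If a rough tally is wanted, the pairs can be counted in whichever file captured the bdevperf output (try.txt in this run):
  grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt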
00:15:13.385 [2024-12-14 06:45:20.749333] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:13.385 [2024-12-14 06:45:20.749383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.385 [2024-12-14 06:45:20.749402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.385 [2024-12-14 06:45:20.749427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.385 [2024-12-14 06:45:20.749441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.385 [2024-12-14 06:45:20.749453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.385 [2024-12-14 06:45:20.749467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.385 [2024-12-14 06:45:20.749480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.385 [2024-12-14 06:45:20.749492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.385 [2024-12-14 06:45:20.749504] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:13.385 [2024-12-14 06:45:20.749547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e8690 (9): Bad file descriptor 00:15:13.385 [2024-12-14 06:45:20.751774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:13.385 [2024-12-14 06:45:20.786551] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:13.385 00:15:13.385 Latency(us) 00:15:13.385 [2024-12-14T06:45:27.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.385 [2024-12-14T06:45:27.377Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:13.385 Verification LBA range: start 0x0 length 0x4000 00:15:13.385 NVMe0n1 : 15.01 13415.46 52.40 334.61 0.00 9291.36 525.03 13941.29 00:15:13.385 [2024-12-14T06:45:27.377Z] =================================================================================================================== 00:15:13.385 [2024-12-14T06:45:27.377Z] Total : 13415.46 52.40 334.61 0.00 9291.36 525.03 13941.29 00:15:13.385 Received shutdown signal, test time was about 15.000000 seconds 00:15:13.385 00:15:13.385 Latency(us) 00:15:13.385 [2024-12-14T06:45:27.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.385 [2024-12-14T06:45:27.377Z] =================================================================================================================== 00:15:13.385 [2024-12-14T06:45:27.377Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.385 06:45:26 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:13.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
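A quick consistency check on the 15-second run summarized above (all inputs taken from the table itself; IO size 4096 B from the job line):
  MiB/s    : 13415.46 IOPS x 4096 B / 2^20   = 52.40 MiB/s   (matches the MiB/s column)
  I/O count: 13415.46 IOPS x 15.01 s         = ~201,000 completed I/Os
  Failures : 334.61 Fail/s x 15.01 s         = ~5,000 I/Os, consistent with the SQ-deletion aborts logged around each of the three path switches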
00:15:13.385 06:45:26 -- host/failover.sh@65 -- # count=3 00:15:13.385 06:45:26 -- host/failover.sh@67 -- # (( count != 3 )) 00:15:13.385 06:45:26 -- host/failover.sh@73 -- # bdevperf_pid=70331 00:15:13.385 06:45:26 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:13.385 06:45:26 -- host/failover.sh@75 -- # waitforlisten 70331 /var/tmp/bdevperf.sock 00:15:13.385 06:45:26 -- common/autotest_common.sh@829 -- # '[' -z 70331 ']' 00:15:13.385 06:45:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.385 06:45:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.385 06:45:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.385 06:45:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.385 06:45:26 -- common/autotest_common.sh@10 -- # set +x 00:15:13.952 06:45:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:13.952 06:45:27 -- common/autotest_common.sh@862 -- # return 0 00:15:13.952 06:45:27 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:13.952 [2024-12-14 06:45:27.910646] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:13.952 06:45:27 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:14.211 [2024-12-14 06:45:28.146874] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:14.211 06:45:28 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:14.469 NVMe0n1 00:15:14.727 06:45:28 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:14.986 00:15:14.986 06:45:28 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:15.245 00:15:15.245 06:45:29 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:15.245 06:45:29 -- host/failover.sh@82 -- # grep -q NVMe0 00:15:15.503 06:45:29 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:15.762 06:45:29 -- host/failover.sh@87 -- # sleep 3 00:15:19.071 06:45:32 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:19.072 06:45:32 -- host/failover.sh@88 -- # grep -q NVMe0 00:15:19.072 06:45:32 -- host/failover.sh@90 -- # run_test_pid=70408 00:15:19.072 06:45:32 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:19.072 06:45:32 -- host/failover.sh@92 -- # wait 70408 00:15:20.006 0 00:15:20.006 06:45:33 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:20.006 [2024-12-14 06:45:26.702566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:20.006 [2024-12-14 06:45:26.702669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70331 ] 00:15:20.006 [2024-12-14 06:45:26.842641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.007 [2024-12-14 06:45:26.899306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.007 [2024-12-14 06:45:29.551630] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:20.007 [2024-12-14 06:45:29.551726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.007 [2024-12-14 06:45:29.551750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.007 [2024-12-14 06:45:29.551767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.007 [2024-12-14 06:45:29.551780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.007 [2024-12-14 06:45:29.551794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.007 [2024-12-14 06:45:29.551806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.007 [2024-12-14 06:45:29.551820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:20.007 [2024-12-14 06:45:29.551833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:20.007 [2024-12-14 06:45:29.551846] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:20.007 [2024-12-14 06:45:29.551920] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:20.007 [2024-12-14 06:45:29.551953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa3690 (9): Bad file descriptor 00:15:20.007 [2024-12-14 06:45:29.563517] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:20.007 Running I/O for 1 seconds... 
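For reference, the failover path setup driven by failover.sh above can be reproduced by hand. The sketch below reuses the binaries, addresses, and NQN exactly as they appear in this trace, and assumes the target already serves nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and that attaching the same bdev name to extra trids registers them as failover paths (the behavior this test relies on in this SPDK revision):
  # start bdevperf in RPC-driven mode (-z) and let it idle until perform_tests
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # (wait for /var/tmp/bdevperf.sock to appear before issuing RPCs)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target side: add two more listeners next to the existing 4420 one
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # initiator side: one primary path plus two alternates under the same bdev name
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # drop the active 4420 path; I/O should fail over to the next registered trid
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # kick off the queued verify job, then count successful resets in the captured log
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt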
00:15:20.007 00:15:20.007 Latency(us) 00:15:20.007 [2024-12-14T06:45:33.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.007 [2024-12-14T06:45:33.999Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:20.007 Verification LBA range: start 0x0 length 0x4000 00:15:20.007 NVMe0n1 : 1.01 13403.50 52.36 0.00 0.00 9500.16 919.74 10902.81 00:15:20.007 [2024-12-14T06:45:33.999Z] =================================================================================================================== 00:15:20.007 [2024-12-14T06:45:33.999Z] Total : 13403.50 52.36 0.00 0.00 9500.16 919.74 10902.81 00:15:20.007 06:45:33 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:20.007 06:45:33 -- host/failover.sh@95 -- # grep -q NVMe0 00:15:20.265 06:45:34 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:20.524 06:45:34 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:20.524 06:45:34 -- host/failover.sh@99 -- # grep -q NVMe0 00:15:20.782 06:45:34 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:21.347 06:45:35 -- host/failover.sh@101 -- # sleep 3 00:15:24.631 06:45:38 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:24.631 06:45:38 -- host/failover.sh@103 -- # grep -q NVMe0 00:15:24.631 06:45:38 -- host/failover.sh@108 -- # killprocess 70331 00:15:24.631 06:45:38 -- common/autotest_common.sh@936 -- # '[' -z 70331 ']' 00:15:24.631 06:45:38 -- common/autotest_common.sh@940 -- # kill -0 70331 00:15:24.631 06:45:38 -- common/autotest_common.sh@941 -- # uname 00:15:24.631 06:45:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:24.631 06:45:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70331 00:15:24.631 killing process with pid 70331 00:15:24.631 06:45:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:24.631 06:45:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:24.631 06:45:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70331' 00:15:24.631 06:45:38 -- common/autotest_common.sh@955 -- # kill 70331 00:15:24.631 06:45:38 -- common/autotest_common.sh@960 -- # wait 70331 00:15:24.631 06:45:38 -- host/failover.sh@110 -- # sync 00:15:24.631 06:45:38 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.889 06:45:38 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:24.889 06:45:38 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:24.889 06:45:38 -- host/failover.sh@116 -- # nvmftestfini 00:15:24.889 06:45:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:24.889 06:45:38 -- nvmf/common.sh@116 -- # sync 00:15:24.889 06:45:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:24.889 06:45:38 -- nvmf/common.sh@119 -- # set +e 00:15:24.889 06:45:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:24.889 06:45:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:24.889 rmmod nvme_tcp 
00:15:24.889 rmmod nvme_fabrics 00:15:24.889 rmmod nvme_keyring 00:15:25.148 06:45:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:25.148 06:45:38 -- nvmf/common.sh@123 -- # set -e 00:15:25.148 06:45:38 -- nvmf/common.sh@124 -- # return 0 00:15:25.148 06:45:38 -- nvmf/common.sh@477 -- # '[' -n 70074 ']' 00:15:25.148 06:45:38 -- nvmf/common.sh@478 -- # killprocess 70074 00:15:25.148 06:45:38 -- common/autotest_common.sh@936 -- # '[' -z 70074 ']' 00:15:25.148 06:45:38 -- common/autotest_common.sh@940 -- # kill -0 70074 00:15:25.148 06:45:38 -- common/autotest_common.sh@941 -- # uname 00:15:25.148 06:45:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:25.148 06:45:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70074 00:15:25.148 killing process with pid 70074 00:15:25.148 06:45:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:25.148 06:45:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:25.148 06:45:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70074' 00:15:25.148 06:45:38 -- common/autotest_common.sh@955 -- # kill 70074 00:15:25.148 06:45:38 -- common/autotest_common.sh@960 -- # wait 70074 00:15:25.148 06:45:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:25.148 06:45:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:25.148 06:45:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:25.148 06:45:39 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.148 06:45:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:25.148 06:45:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.148 06:45:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.148 06:45:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.406 06:45:39 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:25.406 00:15:25.406 real 0m32.921s 00:15:25.406 user 2m7.643s 00:15:25.406 sys 0m5.271s 00:15:25.406 06:45:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:25.406 06:45:39 -- common/autotest_common.sh@10 -- # set +x 00:15:25.406 ************************************ 00:15:25.406 END TEST nvmf_failover 00:15:25.406 ************************************ 00:15:25.406 06:45:39 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:25.406 06:45:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:25.406 06:45:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:25.406 06:45:39 -- common/autotest_common.sh@10 -- # set +x 00:15:25.406 ************************************ 00:15:25.406 START TEST nvmf_discovery 00:15:25.406 ************************************ 00:15:25.406 06:45:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:25.406 * Looking for test storage... 
00:15:25.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:25.406 06:45:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:25.406 06:45:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:25.406 06:45:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:25.406 06:45:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:25.406 06:45:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:25.406 06:45:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:25.406 06:45:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:25.406 06:45:39 -- scripts/common.sh@335 -- # IFS=.-: 00:15:25.406 06:45:39 -- scripts/common.sh@335 -- # read -ra ver1 00:15:25.406 06:45:39 -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.406 06:45:39 -- scripts/common.sh@336 -- # read -ra ver2 00:15:25.406 06:45:39 -- scripts/common.sh@337 -- # local 'op=<' 00:15:25.406 06:45:39 -- scripts/common.sh@339 -- # ver1_l=2 00:15:25.406 06:45:39 -- scripts/common.sh@340 -- # ver2_l=1 00:15:25.407 06:45:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:25.407 06:45:39 -- scripts/common.sh@343 -- # case "$op" in 00:15:25.407 06:45:39 -- scripts/common.sh@344 -- # : 1 00:15:25.407 06:45:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:25.407 06:45:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:25.407 06:45:39 -- scripts/common.sh@364 -- # decimal 1 00:15:25.407 06:45:39 -- scripts/common.sh@352 -- # local d=1 00:15:25.407 06:45:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.407 06:45:39 -- scripts/common.sh@354 -- # echo 1 00:15:25.407 06:45:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:25.407 06:45:39 -- scripts/common.sh@365 -- # decimal 2 00:15:25.407 06:45:39 -- scripts/common.sh@352 -- # local d=2 00:15:25.407 06:45:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.407 06:45:39 -- scripts/common.sh@354 -- # echo 2 00:15:25.665 06:45:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:25.665 06:45:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:25.665 06:45:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:25.665 06:45:39 -- scripts/common.sh@367 -- # return 0 00:15:25.665 06:45:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.665 06:45:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:25.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.665 --rc genhtml_branch_coverage=1 00:15:25.665 --rc genhtml_function_coverage=1 00:15:25.665 --rc genhtml_legend=1 00:15:25.665 --rc geninfo_all_blocks=1 00:15:25.666 --rc geninfo_unexecuted_blocks=1 00:15:25.666 00:15:25.666 ' 00:15:25.666 06:45:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:25.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.666 --rc genhtml_branch_coverage=1 00:15:25.666 --rc genhtml_function_coverage=1 00:15:25.666 --rc genhtml_legend=1 00:15:25.666 --rc geninfo_all_blocks=1 00:15:25.666 --rc geninfo_unexecuted_blocks=1 00:15:25.666 00:15:25.666 ' 00:15:25.666 06:45:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:25.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.666 --rc genhtml_branch_coverage=1 00:15:25.666 --rc genhtml_function_coverage=1 00:15:25.666 --rc genhtml_legend=1 00:15:25.666 --rc geninfo_all_blocks=1 00:15:25.666 --rc geninfo_unexecuted_blocks=1 00:15:25.666 00:15:25.666 ' 00:15:25.666 
06:45:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:25.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.666 --rc genhtml_branch_coverage=1 00:15:25.666 --rc genhtml_function_coverage=1 00:15:25.666 --rc genhtml_legend=1 00:15:25.666 --rc geninfo_all_blocks=1 00:15:25.666 --rc geninfo_unexecuted_blocks=1 00:15:25.666 00:15:25.666 ' 00:15:25.666 06:45:39 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:25.666 06:45:39 -- nvmf/common.sh@7 -- # uname -s 00:15:25.666 06:45:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.666 06:45:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.666 06:45:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.666 06:45:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.666 06:45:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.666 06:45:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.666 06:45:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.666 06:45:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.666 06:45:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.666 06:45:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.666 06:45:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:15:25.666 06:45:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:15:25.666 06:45:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.666 06:45:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.666 06:45:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:25.666 06:45:39 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.666 06:45:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.666 06:45:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.666 06:45:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.666 06:45:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.666 06:45:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.666 06:45:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.666 06:45:39 -- paths/export.sh@5 -- # export PATH 00:15:25.666 06:45:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.666 06:45:39 -- nvmf/common.sh@46 -- # : 0 00:15:25.666 06:45:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:25.666 06:45:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:25.666 06:45:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:25.666 06:45:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.666 06:45:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.666 06:45:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:25.666 06:45:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:25.666 06:45:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:25.666 06:45:39 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:25.666 06:45:39 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:25.666 06:45:39 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:25.666 06:45:39 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:25.666 06:45:39 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:25.666 06:45:39 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:25.666 06:45:39 -- host/discovery.sh@25 -- # nvmftestinit 00:15:25.666 06:45:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:25.666 06:45:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.666 06:45:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:25.666 06:45:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:25.666 06:45:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:25.666 06:45:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.666 06:45:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.666 06:45:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.666 06:45:39 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:25.666 06:45:39 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:25.666 06:45:39 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:25.666 06:45:39 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:25.666 06:45:39 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:25.666 06:45:39 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:25.666 06:45:39 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.666 06:45:39 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.666 06:45:39 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:25.666 06:45:39 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:25.666 06:45:39 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.666 06:45:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.666 06:45:39 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.666 06:45:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.666 06:45:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.666 06:45:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.666 06:45:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.666 06:45:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.666 06:45:39 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:25.666 06:45:39 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:25.666 Cannot find device "nvmf_tgt_br" 00:15:25.666 06:45:39 -- nvmf/common.sh@154 -- # true 00:15:25.666 06:45:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.666 Cannot find device "nvmf_tgt_br2" 00:15:25.666 06:45:39 -- nvmf/common.sh@155 -- # true 00:15:25.666 06:45:39 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:25.666 06:45:39 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:25.666 Cannot find device "nvmf_tgt_br" 00:15:25.666 06:45:39 -- nvmf/common.sh@157 -- # true 00:15:25.666 06:45:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:25.666 Cannot find device "nvmf_tgt_br2" 00:15:25.666 06:45:39 -- nvmf/common.sh@158 -- # true 00:15:25.666 06:45:39 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:25.666 06:45:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:25.666 06:45:39 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.666 06:45:39 -- nvmf/common.sh@161 -- # true 00:15:25.666 06:45:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.666 06:45:39 -- nvmf/common.sh@162 -- # true 00:15:25.666 06:45:39 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:25.666 06:45:39 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.666 06:45:39 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.666 06:45:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.666 06:45:39 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.666 06:45:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:25.925 06:45:39 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:25.925 06:45:39 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:25.925 06:45:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:25.925 06:45:39 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:25.925 06:45:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:25.925 06:45:39 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:25.925 06:45:39 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:25.925 06:45:39 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.925 06:45:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:25.925 06:45:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.925 06:45:39 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:25.925 06:45:39 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:25.925 06:45:39 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:25.925 06:45:39 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:25.925 06:45:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:25.925 06:45:39 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:25.925 06:45:39 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:25.925 06:45:39 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:25.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:15:25.925 00:15:25.925 --- 10.0.0.2 ping statistics --- 00:15:25.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.925 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:15:25.925 06:45:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:25.925 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:25.925 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:15:25.925 00:15:25.925 --- 10.0.0.3 ping statistics --- 00:15:25.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.925 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:25.925 06:45:39 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:25.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:25.925 00:15:25.925 --- 10.0.0.1 ping statistics --- 00:15:25.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.925 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:25.925 06:45:39 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.925 06:45:39 -- nvmf/common.sh@421 -- # return 0 00:15:25.925 06:45:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:25.925 06:45:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.925 06:45:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:25.925 06:45:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:25.925 06:45:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.925 06:45:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:25.925 06:45:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:25.925 06:45:39 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:25.925 06:45:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:25.925 06:45:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:25.925 06:45:39 -- common/autotest_common.sh@10 -- # set +x 00:15:25.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
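The interface setup traced above (nvmf_veth_init) reduces to a small veth/bridge/namespace topology. A condensed sketch of the same steps, with the addresses used in this run (10.0.0.1 on the initiator side, 10.0.0.2 and 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace):
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: one initiator-side interface, two target-side interfaces
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # move the target ends into the namespace and address everything
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring the links up and bridge the host-side peers together
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # allow NVMe/TCP traffic in and across the bridge, then sanity-check reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1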
00:15:25.925 06:45:39 -- nvmf/common.sh@469 -- # nvmfpid=70685 00:15:25.925 06:45:39 -- nvmf/common.sh@470 -- # waitforlisten 70685 00:15:25.925 06:45:39 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:25.925 06:45:39 -- common/autotest_common.sh@829 -- # '[' -z 70685 ']' 00:15:25.925 06:45:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.925 06:45:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.925 06:45:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.925 06:45:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.925 06:45:39 -- common/autotest_common.sh@10 -- # set +x 00:15:25.925 [2024-12-14 06:45:39.868662] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:25.925 [2024-12-14 06:45:39.868967] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.184 [2024-12-14 06:45:40.003427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.184 [2024-12-14 06:45:40.060020] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:26.184 [2024-12-14 06:45:40.060151] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.185 [2024-12-14 06:45:40.060163] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.185 [2024-12-14 06:45:40.060171] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
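The nvmfappstart step above launches nvmf_tgt inside the target namespace, and waitforlisten then blocks until the JSON-RPC socket answers. A minimal sketch of that pattern, assuming the standard scripts/rpc.py client and the paths shown in the trace (the polling loop and the rpc_get_methods probe are illustrative, not the test helper itself):

# Start the target in the namespace, exactly as traced (-i 0 -e 0xFFFF -m 0x2).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Poll the default RPC socket until the target responds, or bail out if it dies.
for _ in $(seq 1 100); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done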
00:15:26.185 [2024-12-14 06:45:40.060202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.120 06:45:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.120 06:45:40 -- common/autotest_common.sh@862 -- # return 0 00:15:27.120 06:45:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:27.120 06:45:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:27.120 06:45:40 -- common/autotest_common.sh@10 -- # set +x 00:15:27.120 06:45:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.120 06:45:40 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:27.120 06:45:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.120 06:45:40 -- common/autotest_common.sh@10 -- # set +x 00:15:27.120 [2024-12-14 06:45:40.912839] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.120 06:45:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.120 06:45:40 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:27.120 06:45:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.120 06:45:40 -- common/autotest_common.sh@10 -- # set +x 00:15:27.120 [2024-12-14 06:45:40.920957] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:27.120 06:45:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.120 06:45:40 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:27.120 06:45:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.120 06:45:40 -- common/autotest_common.sh@10 -- # set +x 00:15:27.120 null0 00:15:27.120 06:45:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.120 06:45:40 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:27.120 06:45:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.120 06:45:40 -- common/autotest_common.sh@10 -- # set +x 00:15:27.120 null1 00:15:27.120 06:45:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.120 06:45:40 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:27.120 06:45:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.120 06:45:40 -- common/autotest_common.sh@10 -- # set +x 00:15:27.120 06:45:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.120 06:45:40 -- host/discovery.sh@45 -- # hostpid=70717 00:15:27.120 06:45:40 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:27.120 06:45:40 -- host/discovery.sh@46 -- # waitforlisten 70717 /tmp/host.sock 00:15:27.120 06:45:40 -- common/autotest_common.sh@829 -- # '[' -z 70717 ']' 00:15:27.120 06:45:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:27.120 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:27.120 06:45:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:27.120 06:45:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:27.120 06:45:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:27.120 06:45:40 -- common/autotest_common.sh@10 -- # set +x 00:15:27.120 [2024-12-14 06:45:41.006023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
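Before the host-side app is started on /tmp/host.sock, the rpc_cmd calls above configure the target: a TCP transport, a discovery listener on 10.0.0.2:8009, and two null bdevs that later back the namespaces of nqn.2016-06.io.spdk:cnode0. Issued directly through scripts/rpc.py, the same configuration looks roughly like this (a sketch; rpc_cmd in the test suite is a wrapper over the same client and default socket):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192        # transport options exactly as traced
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$RPC bdev_null_create null0 1000 512                # 1000 MB null bdev, 512-byte blocks
$RPC bdev_null_create null1 1000 512
$RPC bdev_wait_for_examine                          # let bdev examination settle before use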
00:15:27.120 [2024-12-14 06:45:41.006122] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70717 ] 00:15:27.407 [2024-12-14 06:45:41.144947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.407 [2024-12-14 06:45:41.212799] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:27.407 [2024-12-14 06:45:41.213032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.342 06:45:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.342 06:45:41 -- common/autotest_common.sh@862 -- # return 0 00:15:28.342 06:45:41 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:28.342 06:45:41 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:28.342 06:45:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.342 06:45:41 -- common/autotest_common.sh@10 -- # set +x 00:15:28.342 06:45:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.342 06:45:41 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:28.342 06:45:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.342 06:45:41 -- common/autotest_common.sh@10 -- # set +x 00:15:28.342 06:45:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.342 06:45:41 -- host/discovery.sh@72 -- # notify_id=0 00:15:28.342 06:45:42 -- host/discovery.sh@78 -- # get_subsystem_names 00:15:28.342 06:45:42 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:28.342 06:45:42 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:28.342 06:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.342 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.342 06:45:42 -- host/discovery.sh@59 -- # sort 00:15:28.342 06:45:42 -- host/discovery.sh@59 -- # xargs 00:15:28.342 06:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.342 06:45:42 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:15:28.342 06:45:42 -- host/discovery.sh@79 -- # get_bdev_list 00:15:28.342 06:45:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:28.342 06:45:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:28.342 06:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.342 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.342 06:45:42 -- host/discovery.sh@55 -- # sort 00:15:28.342 06:45:42 -- host/discovery.sh@55 -- # xargs 00:15:28.342 06:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.342 06:45:42 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:15:28.342 06:45:42 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:28.342 06:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.342 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.342 06:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.342 06:45:42 -- host/discovery.sh@82 -- # get_subsystem_names 00:15:28.342 06:45:42 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:28.342 06:45:42 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:28.342 06:45:42 -- host/discovery.sh@59 
-- # sort 00:15:28.342 06:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.342 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.342 06:45:42 -- host/discovery.sh@59 -- # xargs 00:15:28.342 06:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.342 06:45:42 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:15:28.342 06:45:42 -- host/discovery.sh@83 -- # get_bdev_list 00:15:28.342 06:45:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:28.342 06:45:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:28.342 06:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.342 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.342 06:45:42 -- host/discovery.sh@55 -- # sort 00:15:28.342 06:45:42 -- host/discovery.sh@55 -- # xargs 00:15:28.342 06:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.342 06:45:42 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:28.342 06:45:42 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:28.342 06:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.342 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.342 06:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.342 06:45:42 -- host/discovery.sh@86 -- # get_subsystem_names 00:15:28.342 06:45:42 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:28.342 06:45:42 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:28.342 06:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.342 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.342 06:45:42 -- host/discovery.sh@59 -- # sort 00:15:28.342 06:45:42 -- host/discovery.sh@59 -- # xargs 00:15:28.342 06:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.342 06:45:42 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:15:28.342 06:45:42 -- host/discovery.sh@87 -- # get_bdev_list 00:15:28.342 06:45:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:28.342 06:45:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:28.342 06:45:42 -- host/discovery.sh@55 -- # sort 00:15:28.342 06:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.342 06:45:42 -- host/discovery.sh@55 -- # xargs 00:15:28.342 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.600 06:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.600 06:45:42 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:28.600 06:45:42 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:28.600 06:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.600 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.600 [2024-12-14 06:45:42.381409] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.600 06:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.600 06:45:42 -- host/discovery.sh@92 -- # get_subsystem_names 00:15:28.600 06:45:42 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:28.600 06:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.600 06:45:42 -- host/discovery.sh@59 -- # sort 00:15:28.600 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.600 06:45:42 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:28.600 06:45:42 -- host/discovery.sh@59 -- # xargs 00:15:28.601 06:45:42 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.601 06:45:42 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:28.601 06:45:42 -- host/discovery.sh@93 -- # get_bdev_list 00:15:28.601 06:45:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:28.601 06:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.601 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.601 06:45:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:28.601 06:45:42 -- host/discovery.sh@55 -- # sort 00:15:28.601 06:45:42 -- host/discovery.sh@55 -- # xargs 00:15:28.601 06:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.601 06:45:42 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:15:28.601 06:45:42 -- host/discovery.sh@94 -- # get_notification_count 00:15:28.601 06:45:42 -- host/discovery.sh@74 -- # jq '. | length' 00:15:28.601 06:45:42 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:28.601 06:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.601 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.601 06:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.601 06:45:42 -- host/discovery.sh@74 -- # notification_count=0 00:15:28.601 06:45:42 -- host/discovery.sh@75 -- # notify_id=0 00:15:28.601 06:45:42 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:15:28.601 06:45:42 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:28.601 06:45:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.601 06:45:42 -- common/autotest_common.sh@10 -- # set +x 00:15:28.601 06:45:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.601 06:45:42 -- host/discovery.sh@100 -- # sleep 1 00:15:29.167 [2024-12-14 06:45:43.005765] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:29.167 [2024-12-14 06:45:43.005822] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:29.167 [2024-12-14 06:45:43.005840] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:29.167 [2024-12-14 06:45:43.011811] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:29.167 [2024-12-14 06:45:43.067596] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:29.167 [2024-12-14 06:45:43.067624] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:29.735 06:45:43 -- host/discovery.sh@101 -- # get_subsystem_names 00:15:29.735 06:45:43 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:29.735 06:45:43 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:29.735 06:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.735 06:45:43 -- common/autotest_common.sh@10 -- # set +x 00:15:29.735 06:45:43 -- host/discovery.sh@59 -- # sort 00:15:29.735 06:45:43 -- host/discovery.sh@59 -- # xargs 00:15:29.735 06:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.735 06:45:43 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.735 06:45:43 -- host/discovery.sh@102 -- # get_bdev_list 00:15:29.735 06:45:43 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:29.735 
06:45:43 -- host/discovery.sh@55 -- # sort 00:15:29.735 06:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.735 06:45:43 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:29.735 06:45:43 -- common/autotest_common.sh@10 -- # set +x 00:15:29.735 06:45:43 -- host/discovery.sh@55 -- # xargs 00:15:29.735 06:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.735 06:45:43 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:29.735 06:45:43 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:15:29.735 06:45:43 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:29.735 06:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.735 06:45:43 -- common/autotest_common.sh@10 -- # set +x 00:15:29.735 06:45:43 -- host/discovery.sh@63 -- # sort -n 00:15:29.735 06:45:43 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:29.735 06:45:43 -- host/discovery.sh@63 -- # xargs 00:15:29.735 06:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.735 06:45:43 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:15:29.735 06:45:43 -- host/discovery.sh@104 -- # get_notification_count 00:15:29.735 06:45:43 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:29.735 06:45:43 -- host/discovery.sh@74 -- # jq '. | length' 00:15:29.735 06:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.735 06:45:43 -- common/autotest_common.sh@10 -- # set +x 00:15:29.735 06:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.993 06:45:43 -- host/discovery.sh@74 -- # notification_count=1 00:15:29.993 06:45:43 -- host/discovery.sh@75 -- # notify_id=1 00:15:29.993 06:45:43 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:15:29.993 06:45:43 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:29.993 06:45:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.993 06:45:43 -- common/autotest_common.sh@10 -- # set +x 00:15:29.993 06:45:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.993 06:45:43 -- host/discovery.sh@109 -- # sleep 1 00:15:30.929 06:45:44 -- host/discovery.sh@110 -- # get_bdev_list 00:15:30.929 06:45:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:30.929 06:45:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:30.929 06:45:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.929 06:45:44 -- common/autotest_common.sh@10 -- # set +x 00:15:30.929 06:45:44 -- host/discovery.sh@55 -- # sort 00:15:30.929 06:45:44 -- host/discovery.sh@55 -- # xargs 00:15:30.929 06:45:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.929 06:45:44 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:30.929 06:45:44 -- host/discovery.sh@111 -- # get_notification_count 00:15:30.929 06:45:44 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:30.929 06:45:44 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:30.929 06:45:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.929 06:45:44 -- common/autotest_common.sh@10 -- # set +x 00:15:30.929 06:45:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.929 06:45:44 -- host/discovery.sh@74 -- # notification_count=1 00:15:30.929 06:45:44 -- host/discovery.sh@75 -- # notify_id=2 00:15:30.929 06:45:44 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:15:30.929 06:45:44 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:30.929 06:45:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.929 06:45:44 -- common/autotest_common.sh@10 -- # set +x 00:15:30.929 [2024-12-14 06:45:44.884555] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:30.929 [2024-12-14 06:45:44.885578] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:30.929 [2024-12-14 06:45:44.885606] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:30.929 06:45:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.929 06:45:44 -- host/discovery.sh@117 -- # sleep 1 00:15:30.929 [2024-12-14 06:45:44.891575] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:31.188 [2024-12-14 06:45:44.948830] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:31.188 [2024-12-14 06:45:44.948856] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:31.188 [2024-12-14 06:45:44.948879] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:32.125 06:45:45 -- host/discovery.sh@118 -- # get_subsystem_names 00:15:32.125 06:45:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:32.125 06:45:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:32.125 06:45:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.125 06:45:45 -- host/discovery.sh@59 -- # sort 00:15:32.125 06:45:45 -- common/autotest_common.sh@10 -- # set +x 00:15:32.125 06:45:45 -- host/discovery.sh@59 -- # xargs 00:15:32.125 06:45:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.125 06:45:45 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.125 06:45:45 -- host/discovery.sh@119 -- # get_bdev_list 00:15:32.125 06:45:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:32.125 06:45:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:32.125 06:45:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.125 06:45:45 -- common/autotest_common.sh@10 -- # set +x 00:15:32.125 06:45:45 -- host/discovery.sh@55 -- # sort 00:15:32.125 06:45:45 -- host/discovery.sh@55 -- # xargs 00:15:32.125 06:45:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.125 06:45:46 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:32.125 06:45:46 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:15:32.125 06:45:46 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:32.125 06:45:46 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:32.125 06:45:46 -- host/discovery.sh@63 
-- # sort -n 00:15:32.125 06:45:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.125 06:45:46 -- host/discovery.sh@63 -- # xargs 00:15:32.125 06:45:46 -- common/autotest_common.sh@10 -- # set +x 00:15:32.125 06:45:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.125 06:45:46 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:32.125 06:45:46 -- host/discovery.sh@121 -- # get_notification_count 00:15:32.125 06:45:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:32.125 06:45:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.125 06:45:46 -- host/discovery.sh@74 -- # jq '. | length' 00:15:32.125 06:45:46 -- common/autotest_common.sh@10 -- # set +x 00:15:32.125 06:45:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.384 06:45:46 -- host/discovery.sh@74 -- # notification_count=0 00:15:32.384 06:45:46 -- host/discovery.sh@75 -- # notify_id=2 00:15:32.384 06:45:46 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:15:32.384 06:45:46 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:32.384 06:45:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.384 06:45:46 -- common/autotest_common.sh@10 -- # set +x 00:15:32.384 [2024-12-14 06:45:46.126784] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:32.384 [2024-12-14 06:45:46.126817] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:32.384 06:45:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.384 06:45:46 -- host/discovery.sh@127 -- # sleep 1 00:15:32.384 [2024-12-14 06:45:46.132773] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:32.384 [2024-12-14 06:45:46.132802] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:32.384 [2024-12-14 06:45:46.132911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:32.384 [2024-12-14 06:45:46.132963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:32.384 [2024-12-14 06:45:46.132977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:32.384 [2024-12-14 06:45:46.132986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:32.384 [2024-12-14 06:45:46.132996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:32.384 [2024-12-14 06:45:46.133005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:32.384 [2024-12-14 06:45:46.133015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:32.384 [2024-12-14 06:45:46.133023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:32.384 [2024-12-14 06:45:46.133033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x18fac10 is same with the state(5) to be set 00:15:33.319 06:45:47 -- host/discovery.sh@128 -- # get_subsystem_names 00:15:33.319 06:45:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:33.319 06:45:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.319 06:45:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:33.319 06:45:47 -- host/discovery.sh@59 -- # sort 00:15:33.319 06:45:47 -- common/autotest_common.sh@10 -- # set +x 00:15:33.319 06:45:47 -- host/discovery.sh@59 -- # xargs 00:15:33.319 06:45:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.319 06:45:47 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.319 06:45:47 -- host/discovery.sh@129 -- # get_bdev_list 00:15:33.319 06:45:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:33.319 06:45:47 -- host/discovery.sh@55 -- # xargs 00:15:33.319 06:45:47 -- host/discovery.sh@55 -- # sort 00:15:33.319 06:45:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:33.319 06:45:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.319 06:45:47 -- common/autotest_common.sh@10 -- # set +x 00:15:33.319 06:45:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.319 06:45:47 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:33.319 06:45:47 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:15:33.319 06:45:47 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:33.319 06:45:47 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:33.319 06:45:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.319 06:45:47 -- common/autotest_common.sh@10 -- # set +x 00:15:33.319 06:45:47 -- host/discovery.sh@63 -- # sort -n 00:15:33.319 06:45:47 -- host/discovery.sh@63 -- # xargs 00:15:33.319 06:45:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.319 06:45:47 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:15:33.319 06:45:47 -- host/discovery.sh@131 -- # get_notification_count 00:15:33.319 06:45:47 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:33.319 06:45:47 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:33.319 06:45:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.319 06:45:47 -- common/autotest_common.sh@10 -- # set +x 00:15:33.578 06:45:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.578 06:45:47 -- host/discovery.sh@74 -- # notification_count=0 00:15:33.578 06:45:47 -- host/discovery.sh@75 -- # notify_id=2 00:15:33.578 06:45:47 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:15:33.578 06:45:47 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:33.578 06:45:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.578 06:45:47 -- common/autotest_common.sh@10 -- # set +x 00:15:33.578 06:45:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.578 06:45:47 -- host/discovery.sh@135 -- # sleep 1 00:15:34.513 06:45:48 -- host/discovery.sh@136 -- # get_subsystem_names 00:15:34.513 06:45:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:34.513 06:45:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.513 06:45:48 -- common/autotest_common.sh@10 -- # set +x 00:15:34.513 06:45:48 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:34.513 06:45:48 -- host/discovery.sh@59 -- # sort 00:15:34.513 06:45:48 -- host/discovery.sh@59 -- # xargs 00:15:34.513 06:45:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.513 06:45:48 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:15:34.513 06:45:48 -- host/discovery.sh@137 -- # get_bdev_list 00:15:34.513 06:45:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:34.513 06:45:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:34.513 06:45:48 -- host/discovery.sh@55 -- # sort 00:15:34.513 06:45:48 -- host/discovery.sh@55 -- # xargs 00:15:34.513 06:45:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.513 06:45:48 -- common/autotest_common.sh@10 -- # set +x 00:15:34.513 06:45:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.513 06:45:48 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:15:34.513 06:45:48 -- host/discovery.sh@138 -- # get_notification_count 00:15:34.513 06:45:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:34.513 06:45:48 -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:34.513 06:45:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.513 06:45:48 -- common/autotest_common.sh@10 -- # set +x 00:15:34.513 06:45:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.772 06:45:48 -- host/discovery.sh@74 -- # notification_count=2 00:15:34.772 06:45:48 -- host/discovery.sh@75 -- # notify_id=4 00:15:34.772 06:45:48 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:15:34.772 06:45:48 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:34.772 06:45:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.772 06:45:48 -- common/autotest_common.sh@10 -- # set +x 00:15:35.708 [2024-12-14 06:45:49.538329] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:35.708 [2024-12-14 06:45:49.538355] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:35.708 [2024-12-14 06:45:49.538372] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:35.708 [2024-12-14 06:45:49.544372] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:35.708 [2024-12-14 06:45:49.603432] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:35.708 [2024-12-14 06:45:49.603655] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:35.708 06:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.708 06:45:49 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.708 06:45:49 -- common/autotest_common.sh@650 -- # local es=0 00:15:35.708 06:45:49 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.708 06:45:49 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:35.708 06:45:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.708 06:45:49 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:35.708 06:45:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.708 06:45:49 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.708 06:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.708 06:45:49 -- common/autotest_common.sh@10 -- # set +x 00:15:35.708 request: 00:15:35.708 { 00:15:35.708 "name": "nvme", 00:15:35.708 "trtype": "tcp", 00:15:35.708 "traddr": "10.0.0.2", 00:15:35.708 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:35.708 "adrfam": "ipv4", 00:15:35.708 "trsvcid": "8009", 00:15:35.708 "wait_for_attach": true, 00:15:35.708 "method": "bdev_nvme_start_discovery", 00:15:35.708 "req_id": 1 00:15:35.708 } 00:15:35.708 Got JSON-RPC error response 00:15:35.708 response: 00:15:35.708 { 00:15:35.708 "code": -17, 00:15:35.708 "message": "File exists" 00:15:35.708 } 00:15:35.708 06:45:49 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:35.708 06:45:49 -- common/autotest_common.sh@653 -- # es=1 00:15:35.708 06:45:49 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:35.708 06:45:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:35.708 06:45:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:35.708 06:45:49 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:15:35.708 06:45:49 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:35.708 06:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.708 06:45:49 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:35.708 06:45:49 -- host/discovery.sh@67 -- # sort 00:15:35.708 06:45:49 -- common/autotest_common.sh@10 -- # set +x 00:15:35.708 06:45:49 -- host/discovery.sh@67 -- # xargs 00:15:35.708 06:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.708 06:45:49 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:15:35.708 06:45:49 -- host/discovery.sh@147 -- # get_bdev_list 00:15:35.708 06:45:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.708 06:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.708 06:45:49 -- common/autotest_common.sh@10 -- # set +x 00:15:35.708 06:45:49 -- host/discovery.sh@55 -- # xargs 00:15:35.708 06:45:49 -- host/discovery.sh@55 -- # sort 00:15:35.708 06:45:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.967 06:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.967 06:45:49 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:35.967 06:45:49 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.967 06:45:49 -- common/autotest_common.sh@650 -- # local es=0 00:15:35.967 06:45:49 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.967 06:45:49 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:35.967 06:45:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.967 06:45:49 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:35.967 06:45:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.967 06:45:49 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:35.967 06:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.967 06:45:49 -- common/autotest_common.sh@10 -- # set +x 00:15:35.967 request: 00:15:35.967 { 00:15:35.967 "name": "nvme_second", 00:15:35.967 "trtype": "tcp", 00:15:35.967 "traddr": "10.0.0.2", 00:15:35.967 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:35.967 "adrfam": "ipv4", 00:15:35.967 "trsvcid": "8009", 00:15:35.967 "wait_for_attach": true, 00:15:35.967 "method": "bdev_nvme_start_discovery", 00:15:35.967 "req_id": 1 00:15:35.967 } 00:15:35.967 Got JSON-RPC error response 00:15:35.967 response: 00:15:35.967 { 00:15:35.967 "code": -17, 00:15:35.967 "message": "File exists" 00:15:35.967 } 00:15:35.967 06:45:49 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:35.967 06:45:49 -- common/autotest_common.sh@653 -- # es=1 00:15:35.967 06:45:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:35.967 06:45:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:35.967 06:45:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:35.967 
06:45:49 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:15:35.967 06:45:49 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:35.967 06:45:49 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:35.967 06:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.967 06:45:49 -- host/discovery.sh@67 -- # sort 00:15:35.967 06:45:49 -- common/autotest_common.sh@10 -- # set +x 00:15:35.967 06:45:49 -- host/discovery.sh@67 -- # xargs 00:15:35.967 06:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.967 06:45:49 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:15:35.967 06:45:49 -- host/discovery.sh@153 -- # get_bdev_list 00:15:35.967 06:45:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:35.967 06:45:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:35.967 06:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.967 06:45:49 -- host/discovery.sh@55 -- # xargs 00:15:35.967 06:45:49 -- host/discovery.sh@55 -- # sort 00:15:35.967 06:45:49 -- common/autotest_common.sh@10 -- # set +x 00:15:35.967 06:45:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.967 06:45:49 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:35.967 06:45:49 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:35.967 06:45:49 -- common/autotest_common.sh@650 -- # local es=0 00:15:35.967 06:45:49 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:35.967 06:45:49 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:35.967 06:45:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.967 06:45:49 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:35.967 06:45:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.967 06:45:49 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:35.967 06:45:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.967 06:45:49 -- common/autotest_common.sh@10 -- # set +x 00:15:36.902 [2024-12-14 06:45:50.885615] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:36.902 [2024-12-14 06:45:50.885737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:36.902 [2024-12-14 06:45:50.885779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:36.902 [2024-12-14 06:45:50.885794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x194c270 with addr=10.0.0.2, port=8010 00:15:36.902 [2024-12-14 06:45:50.885811] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:36.902 [2024-12-14 06:45:50.885820] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:36.902 [2024-12-14 06:45:50.885830] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:38.302 [2024-12-14 06:45:51.885600] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:38.302 [2024-12-14 06:45:51.885704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
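The repeated "connect() failed, errno = 111" messages here are ECONNREFUSED: the test deliberately points a second discovery request at port 8010, where nothing listens, and bounds the attempt with -T 3000 (attach_timeout_ms). The expected outcome, visible just below, is JSON-RPC error -110, "Connection timed out". As a standalone sketch against the host app's socket (arguments mirror the trace; the surrounding check is illustrative):

# Expected to fail: no listener on 10.0.0.2:8010, attach bounded to 3000 ms.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
       bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
       -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000; then
    echo "discovery to a dead port unexpectedly succeeded" >&2
    exit 1
fi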
00:15:38.302 [2024-12-14 06:45:51.885743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:15:38.302 [2024-12-14 06:45:51.885758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x194c270 with addr=10.0.0.2, port=8010 00:15:38.302 [2024-12-14 06:45:51.885775] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:38.302 [2024-12-14 06:45:51.885784] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:38.302 [2024-12-14 06:45:51.885793] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:15:39.237 [2024-12-14 06:45:52.885471] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:15:39.237 request: 00:15:39.237 { 00:15:39.237 "name": "nvme_second", 00:15:39.237 "trtype": "tcp", 00:15:39.237 "traddr": "10.0.0.2", 00:15:39.237 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:39.237 "adrfam": "ipv4", 00:15:39.237 "trsvcid": "8010", 00:15:39.237 "attach_timeout_ms": 3000, 00:15:39.237 "method": "bdev_nvme_start_discovery", 00:15:39.237 "req_id": 1 00:15:39.237 } 00:15:39.237 Got JSON-RPC error response 00:15:39.237 response: 00:15:39.237 { 00:15:39.237 "code": -110, 00:15:39.237 "message": "Connection timed out" 00:15:39.237 } 00:15:39.237 06:45:52 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:39.237 06:45:52 -- common/autotest_common.sh@653 -- # es=1 00:15:39.237 06:45:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:39.237 06:45:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:39.237 06:45:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:39.237 06:45:52 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:15:39.237 06:45:52 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:39.237 06:45:52 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:39.237 06:45:52 -- host/discovery.sh@67 -- # sort 00:15:39.237 06:45:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.237 06:45:52 -- common/autotest_common.sh@10 -- # set +x 00:15:39.237 06:45:52 -- host/discovery.sh@67 -- # xargs 00:15:39.237 06:45:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.237 06:45:52 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:15:39.237 06:45:52 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:15:39.237 06:45:52 -- host/discovery.sh@162 -- # kill 70717 00:15:39.237 06:45:52 -- host/discovery.sh@163 -- # nvmftestfini 00:15:39.237 06:45:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:39.237 06:45:52 -- nvmf/common.sh@116 -- # sync 00:15:39.237 06:45:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:39.237 06:45:52 -- nvmf/common.sh@119 -- # set +e 00:15:39.237 06:45:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:39.237 06:45:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:39.237 rmmod nvme_tcp 00:15:39.237 rmmod nvme_fabrics 00:15:39.237 rmmod nvme_keyring 00:15:39.238 06:45:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:39.238 06:45:53 -- nvmf/common.sh@123 -- # set -e 00:15:39.238 06:45:53 -- nvmf/common.sh@124 -- # return 0 00:15:39.238 06:45:53 -- nvmf/common.sh@477 -- # '[' -n 70685 ']' 00:15:39.238 06:45:53 -- nvmf/common.sh@478 -- # killprocess 70685 00:15:39.238 06:45:53 -- common/autotest_common.sh@936 -- # '[' -z 70685 ']' 00:15:39.238 06:45:53 -- common/autotest_common.sh@940 -- # kill -0 70685 00:15:39.238 06:45:53 -- 
common/autotest_common.sh@941 -- # uname 00:15:39.238 06:45:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:39.238 06:45:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70685 00:15:39.238 killing process with pid 70685 00:15:39.238 06:45:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:39.238 06:45:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:39.238 06:45:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70685' 00:15:39.238 06:45:53 -- common/autotest_common.sh@955 -- # kill 70685 00:15:39.238 06:45:53 -- common/autotest_common.sh@960 -- # wait 70685 00:15:39.497 06:45:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:39.497 06:45:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:39.497 06:45:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:39.497 06:45:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.497 06:45:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:39.497 06:45:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.497 06:45:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.497 06:45:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.497 06:45:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:39.497 ************************************ 00:15:39.497 END TEST nvmf_discovery 00:15:39.497 ************************************ 00:15:39.497 00:15:39.497 real 0m14.087s 00:15:39.497 user 0m26.966s 00:15:39.497 sys 0m2.190s 00:15:39.497 06:45:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:39.497 06:45:53 -- common/autotest_common.sh@10 -- # set +x 00:15:39.497 06:45:53 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:39.497 06:45:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:39.497 06:45:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.497 06:45:53 -- common/autotest_common.sh@10 -- # set +x 00:15:39.497 ************************************ 00:15:39.497 START TEST nvmf_discovery_remove_ifc 00:15:39.497 ************************************ 00:15:39.497 06:45:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:39.497 * Looking for test storage... 
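The timing summary above closes the nvmf_discovery test, and run_test immediately launches the next one. To reproduce that second test outside this pipeline, the traced invocation amounts to the following (root is required for the namespace and iptables setup; the path is the one shown in the trace):

sudo /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp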
00:15:39.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:39.497 06:45:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:39.497 06:45:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:39.497 06:45:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:39.756 06:45:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:39.756 06:45:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:39.756 06:45:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:39.756 06:45:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:39.756 06:45:53 -- scripts/common.sh@335 -- # IFS=.-: 00:15:39.756 06:45:53 -- scripts/common.sh@335 -- # read -ra ver1 00:15:39.756 06:45:53 -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.756 06:45:53 -- scripts/common.sh@336 -- # read -ra ver2 00:15:39.756 06:45:53 -- scripts/common.sh@337 -- # local 'op=<' 00:15:39.756 06:45:53 -- scripts/common.sh@339 -- # ver1_l=2 00:15:39.756 06:45:53 -- scripts/common.sh@340 -- # ver2_l=1 00:15:39.756 06:45:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:39.756 06:45:53 -- scripts/common.sh@343 -- # case "$op" in 00:15:39.756 06:45:53 -- scripts/common.sh@344 -- # : 1 00:15:39.756 06:45:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:39.756 06:45:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:39.756 06:45:53 -- scripts/common.sh@364 -- # decimal 1 00:15:39.756 06:45:53 -- scripts/common.sh@352 -- # local d=1 00:15:39.756 06:45:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.756 06:45:53 -- scripts/common.sh@354 -- # echo 1 00:15:39.756 06:45:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:39.756 06:45:53 -- scripts/common.sh@365 -- # decimal 2 00:15:39.756 06:45:53 -- scripts/common.sh@352 -- # local d=2 00:15:39.756 06:45:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.756 06:45:53 -- scripts/common.sh@354 -- # echo 2 00:15:39.756 06:45:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:39.756 06:45:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:39.756 06:45:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:39.756 06:45:53 -- scripts/common.sh@367 -- # return 0 00:15:39.756 06:45:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.756 06:45:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:39.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.756 --rc genhtml_branch_coverage=1 00:15:39.756 --rc genhtml_function_coverage=1 00:15:39.756 --rc genhtml_legend=1 00:15:39.756 --rc geninfo_all_blocks=1 00:15:39.756 --rc geninfo_unexecuted_blocks=1 00:15:39.756 00:15:39.756 ' 00:15:39.756 06:45:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:39.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.756 --rc genhtml_branch_coverage=1 00:15:39.756 --rc genhtml_function_coverage=1 00:15:39.756 --rc genhtml_legend=1 00:15:39.756 --rc geninfo_all_blocks=1 00:15:39.756 --rc geninfo_unexecuted_blocks=1 00:15:39.756 00:15:39.756 ' 00:15:39.756 06:45:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:39.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.756 --rc genhtml_branch_coverage=1 00:15:39.756 --rc genhtml_function_coverage=1 00:15:39.756 --rc genhtml_legend=1 00:15:39.756 --rc geninfo_all_blocks=1 00:15:39.756 --rc geninfo_unexecuted_blocks=1 00:15:39.756 00:15:39.756 ' 00:15:39.756 
06:45:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:39.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.756 --rc genhtml_branch_coverage=1 00:15:39.756 --rc genhtml_function_coverage=1 00:15:39.756 --rc genhtml_legend=1 00:15:39.756 --rc geninfo_all_blocks=1 00:15:39.756 --rc geninfo_unexecuted_blocks=1 00:15:39.756 00:15:39.756 ' 00:15:39.756 06:45:53 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.756 06:45:53 -- nvmf/common.sh@7 -- # uname -s 00:15:39.756 06:45:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.756 06:45:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.756 06:45:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.756 06:45:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.756 06:45:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.756 06:45:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.756 06:45:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.756 06:45:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.757 06:45:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.757 06:45:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.757 06:45:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:15:39.757 06:45:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:15:39.757 06:45:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.757 06:45:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.757 06:45:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.757 06:45:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.757 06:45:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.757 06:45:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.757 06:45:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.757 06:45:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.757 06:45:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.757 06:45:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.757 06:45:53 -- paths/export.sh@5 -- # export PATH 00:15:39.757 06:45:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.757 06:45:53 -- nvmf/common.sh@46 -- # : 0 00:15:39.757 06:45:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:39.757 06:45:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:39.757 06:45:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:39.757 06:45:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.757 06:45:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.757 06:45:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:39.757 06:45:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:39.757 06:45:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:39.757 06:45:53 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:39.757 06:45:53 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:15:39.757 06:45:53 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:39.757 06:45:53 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:39.757 06:45:53 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:39.757 06:45:53 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:39.757 06:45:53 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:39.757 06:45:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:39.757 06:45:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.757 06:45:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:39.757 06:45:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:39.757 06:45:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:39.757 06:45:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.757 06:45:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.757 06:45:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.757 06:45:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:39.757 06:45:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:39.757 06:45:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:39.757 06:45:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:39.757 06:45:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:39.757 06:45:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:39.757 06:45:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.757 06:45:53 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.757 06:45:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:39.757 06:45:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:39.757 06:45:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.757 06:45:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.757 06:45:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.757 06:45:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.757 06:45:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.757 06:45:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.757 06:45:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.757 06:45:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.757 06:45:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:39.757 06:45:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:39.757 Cannot find device "nvmf_tgt_br" 00:15:39.757 06:45:53 -- nvmf/common.sh@154 -- # true 00:15:39.757 06:45:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:39.757 Cannot find device "nvmf_tgt_br2" 00:15:39.757 06:45:53 -- nvmf/common.sh@155 -- # true 00:15:39.757 06:45:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:39.757 06:45:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:39.757 Cannot find device "nvmf_tgt_br" 00:15:39.757 06:45:53 -- nvmf/common.sh@157 -- # true 00:15:39.757 06:45:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:39.757 Cannot find device "nvmf_tgt_br2" 00:15:39.757 06:45:53 -- nvmf/common.sh@158 -- # true 00:15:39.757 06:45:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:39.757 06:45:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:39.757 06:45:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.757 06:45:53 -- nvmf/common.sh@161 -- # true 00:15:39.757 06:45:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.757 06:45:53 -- nvmf/common.sh@162 -- # true 00:15:39.757 06:45:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.757 06:45:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.757 06:45:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:39.757 06:45:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:39.757 06:45:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:39.757 06:45:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:40.016 06:45:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:40.016 06:45:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:40.016 06:45:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:40.016 06:45:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:40.016 06:45:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:40.016 06:45:53 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:40.016 06:45:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:40.016 06:45:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.016 06:45:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:40.016 06:45:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:40.016 06:45:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:40.016 06:45:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:40.016 06:45:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:40.016 06:45:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:40.016 06:45:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:40.016 06:45:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:40.016 06:45:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:40.016 06:45:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:40.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:40.016 00:15:40.016 --- 10.0.0.2 ping statistics --- 00:15:40.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.016 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:40.016 06:45:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:40.016 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:40.016 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:15:40.016 00:15:40.016 --- 10.0.0.3 ping statistics --- 00:15:40.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.016 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:40.016 06:45:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:40.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:40.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:40.016 00:15:40.016 --- 10.0.0.1 ping statistics --- 00:15:40.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.016 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:40.016 06:45:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.016 06:45:53 -- nvmf/common.sh@421 -- # return 0 00:15:40.016 06:45:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:40.016 06:45:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.016 06:45:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:40.016 06:45:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:40.016 06:45:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.017 06:45:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:40.017 06:45:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:40.017 06:45:53 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:40.017 06:45:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:40.017 06:45:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:40.017 06:45:53 -- common/autotest_common.sh@10 -- # set +x 00:15:40.017 06:45:53 -- nvmf/common.sh@469 -- # nvmfpid=71222 00:15:40.017 06:45:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:40.017 06:45:53 -- nvmf/common.sh@470 -- # waitforlisten 71222 00:15:40.017 06:45:53 -- common/autotest_common.sh@829 -- # '[' -z 71222 ']' 00:15:40.017 06:45:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.017 06:45:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:40.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.017 06:45:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.017 06:45:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:40.017 06:45:53 -- common/autotest_common.sh@10 -- # set +x 00:15:40.017 [2024-12-14 06:45:53.944566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:40.017 [2024-12-14 06:45:53.944866] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.275 [2024-12-14 06:45:54.082975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.275 [2024-12-14 06:45:54.138431] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:40.275 [2024-12-14 06:45:54.138629] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.275 [2024-12-14 06:45:54.138643] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.275 [2024-12-14 06:45:54.138650] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:40.275 [2024-12-14 06:45:54.138679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.211 06:45:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.211 06:45:54 -- common/autotest_common.sh@862 -- # return 0 00:15:41.211 06:45:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:41.211 06:45:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:41.211 06:45:54 -- common/autotest_common.sh@10 -- # set +x 00:15:41.211 06:45:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.211 06:45:54 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:41.211 06:45:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.211 06:45:54 -- common/autotest_common.sh@10 -- # set +x 00:15:41.211 [2024-12-14 06:45:54.972201] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.211 [2024-12-14 06:45:54.980385] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:41.211 null0 00:15:41.211 [2024-12-14 06:45:55.012266] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.211 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:41.211 06:45:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.211 06:45:55 -- host/discovery_remove_ifc.sh@59 -- # hostpid=71254 00:15:41.211 06:45:55 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:41.211 06:45:55 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 71254 /tmp/host.sock 00:15:41.211 06:45:55 -- common/autotest_common.sh@829 -- # '[' -z 71254 ']' 00:15:41.211 06:45:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:15:41.211 06:45:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.211 06:45:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:41.211 06:45:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.211 06:45:55 -- common/autotest_common.sh@10 -- # set +x 00:15:41.211 [2024-12-14 06:45:55.079570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:41.211 [2024-12-14 06:45:55.080054] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71254 ] 00:15:41.469 [2024-12-14 06:45:55.217518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.469 [2024-12-14 06:45:55.285452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:41.469 [2024-12-14 06:45:55.285933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.469 06:45:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:41.469 06:45:55 -- common/autotest_common.sh@862 -- # return 0 00:15:41.469 06:45:55 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:41.469 06:45:55 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:41.469 06:45:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.469 06:45:55 -- common/autotest_common.sh@10 -- # set +x 00:15:41.469 06:45:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.469 06:45:55 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:41.469 06:45:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.469 06:45:55 -- common/autotest_common.sh@10 -- # set +x 00:15:41.469 06:45:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.469 06:45:55 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:41.469 06:45:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.469 06:45:55 -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 [2024-12-14 06:45:56.415037] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:42.844 [2024-12-14 06:45:56.415261] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:42.844 [2024-12-14 06:45:56.415296] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:42.844 [2024-12-14 06:45:56.421075] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:42.844 [2024-12-14 06:45:56.476602] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:42.844 [2024-12-14 06:45:56.476647] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:42.844 [2024-12-14 06:45:56.476672] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:42.844 [2024-12-14 06:45:56.476686] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:42.844 [2024-12-14 06:45:56.476710] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:42.844 06:45:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:42.844 06:45:56 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.844 [2024-12-14 06:45:56.483787] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2462be0 was disconnected and freed. delete nvme_qpair. 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:42.844 06:45:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.844 06:45:56 -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 06:45:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:42.844 06:45:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:42.844 06:45:56 -- common/autotest_common.sh@10 -- # set +x 00:15:42.844 06:45:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:42.844 06:45:56 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:43.780 06:45:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:43.780 06:45:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:43.780 06:45:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.780 06:45:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:43.780 06:45:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:43.780 06:45:57 -- common/autotest_common.sh@10 -- # set +x 00:15:43.780 06:45:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:43.780 06:45:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.780 06:45:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:43.780 06:45:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:44.715 06:45:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:44.715 06:45:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:44.715 06:45:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:44.715 06:45:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.715 06:45:58 -- common/autotest_common.sh@10 -- # set +x 00:15:44.715 06:45:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:44.715 06:45:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:44.715 06:45:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.973 06:45:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:44.973 06:45:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:45.909 06:45:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:45.909 06:45:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:15:45.909 06:45:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:45.909 06:45:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.909 06:45:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:45.909 06:45:59 -- common/autotest_common.sh@10 -- # set +x 00:15:45.910 06:45:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:45.910 06:45:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.910 06:45:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:45.910 06:45:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:46.846 06:46:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:46.846 06:46:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:46.846 06:46:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:46.846 06:46:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.846 06:46:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:46.846 06:46:00 -- common/autotest_common.sh@10 -- # set +x 00:15:46.846 06:46:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:46.846 06:46:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.104 06:46:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:47.104 06:46:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:48.038 06:46:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:48.038 06:46:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:48.038 06:46:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.038 06:46:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:48.038 06:46:01 -- common/autotest_common.sh@10 -- # set +x 00:15:48.038 06:46:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:48.038 06:46:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:48.038 06:46:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.038 06:46:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:48.038 06:46:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:48.038 [2024-12-14 06:46:01.915009] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:48.038 [2024-12-14 06:46:01.915225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.038 [2024-12-14 06:46:01.915245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.038 [2024-12-14 06:46:01.915273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.038 [2024-12-14 06:46:01.915283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.038 [2024-12-14 06:46:01.915292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.038 [2024-12-14 06:46:01.915303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.038 [2024-12-14 06:46:01.915313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.038 [2024-12-14 06:46:01.915322] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.038 [2024-12-14 06:46:01.915346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:48.038 [2024-12-14 06:46:01.915355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:48.038 [2024-12-14 06:46:01.915364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7de0 is same with the state(5) to be set 00:15:48.038 [2024-12-14 06:46:01.925006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7de0 (9): Bad file descriptor 00:15:48.038 [2024-12-14 06:46:01.935022] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:48.973 06:46:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:48.974 06:46:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:48.974 06:46:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:48.974 06:46:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.974 06:46:02 -- common/autotest_common.sh@10 -- # set +x 00:15:48.974 06:46:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:48.974 06:46:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:49.232 [2024-12-14 06:46:03.000018] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:50.167 [2024-12-14 06:46:04.027909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:51.103 [2024-12-14 06:46:05.048016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:15:51.103 [2024-12-14 06:46:05.048429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23d7de0 with addr=10.0.0.2, port=4420 00:15:51.103 [2024-12-14 06:46:05.048744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d7de0 is same with the state(5) to be set 00:15:51.103 [2024-12-14 06:46:05.049181] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:15:51.103 [2024-12-14 06:46:05.049643] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:15:51.103 [2024-12-14 06:46:05.049700] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:15:51.103 [2024-12-14 06:46:05.049722] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:15:51.103 [2024-12-14 06:46:05.050518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d7de0 (9): Bad file descriptor 00:15:51.103 [2024-12-14 06:46:05.050585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
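The connect() failures and reset errors above are the expected result of the test tearing down the target side of the link: at host/discovery_remove_ifc.sh@75-76 the 10.0.0.2/24 address was deleted and nvmf_tgt_if was set down inside the nvmf_tgt_ns_spdk namespace, so every reconnect attempt from the host app times out with errno 110. How long the bdev layer keeps retrying is set by the options passed to bdev_nvme_start_discovery earlier in this trace (host/discovery_remove_ifc.sh@69). A minimal sketch of that call, assuming the standalone scripts/rpc.py client stands in for the test's rpc_cmd wrapper:

    # Flags taken verbatim from this run:
    #   --ctrlr-loss-timeout-sec 2    declare the controller lost after ~2s without a connection
    #   --reconnect-delay-sec 1       retry the TCP connect roughly once per second
    #   --fast-io-fail-timeout-sec 1  fail outstanding I/O quickly instead of queueing it
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

With a 2-second loss timeout and 1-second reconnect delay, only a couple of the uring/posix connect() retries seen above can happen before the controller is marked failed and its bdev is dropped.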
00:15:51.103 [2024-12-14 06:46:05.050634] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:15:51.103 [2024-12-14 06:46:05.050704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.103 [2024-12-14 06:46:05.050733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.103 [2024-12-14 06:46:05.050759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.103 [2024-12-14 06:46:05.050779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.103 [2024-12-14 06:46:05.050801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.103 [2024-12-14 06:46:05.050821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.103 [2024-12-14 06:46:05.050842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.103 [2024-12-14 06:46:05.050862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.103 [2024-12-14 06:46:05.050978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.103 [2024-12-14 06:46:05.051006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.103 [2024-12-14 06:46:05.051028] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
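The failed state above is the turning point of the test: the next steps in the trace (host/discovery_remove_ifc.sh@82-83) put 10.0.0.2/24 back on nvmf_tgt_if and bring the link up, and the host side is then expected to rediscover the subsystem and surface it as nvme1n1. The waiting is done by polling the bdev list over the host RPC socket. A rough sketch of those helpers, assuming scripts/rpc.py in place of the test's rpc_cmd wrapper (the real functions live in host/discovery_remove_ifc.sh):

    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # poll once per second until the bdev list matches the expected name ("" = empty list)
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }
    wait_for_bdev ''          # old controller gone after the interface was removed
    wait_for_bdev nvme1n1     # rediscovered controller reappears under a new name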
00:15:51.103 [2024-12-14 06:46:05.051060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d81f0 (9): Bad file descriptor 00:15:51.103 [2024-12-14 06:46:05.051611] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:51.103 [2024-12-14 06:46:05.051638] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:15:51.103 06:46:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.103 06:46:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:51.103 06:46:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:52.478 06:46:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.478 06:46:06 -- common/autotest_common.sh@10 -- # set +x 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:52.478 06:46:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:52.478 06:46:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.478 06:46:06 -- common/autotest_common.sh@10 -- # set +x 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:52.478 06:46:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:52.478 06:46:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:53.413 [2024-12-14 06:46:07.060163] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:53.413 [2024-12-14 06:46:07.060193] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:53.413 [2024-12-14 06:46:07.060211] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:53.413 [2024-12-14 06:46:07.066199] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:15:53.413 [2024-12-14 06:46:07.121047] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:53.413 [2024-12-14 06:46:07.121256] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:53.413 [2024-12-14 06:46:07.121336] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:53.413 [2024-12-14 06:46:07.121443] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:15:53.413 [2024-12-14 06:46:07.121502] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:53.413 [2024-12-14 06:46:07.128890] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2419ce0 was disconnected and freed. delete nvme_qpair. 00:15:53.413 06:46:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:53.413 06:46:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.413 06:46:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:53.413 06:46:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.413 06:46:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:53.413 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:15:53.413 06:46:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:53.413 06:46:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.413 06:46:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:53.413 06:46:07 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:53.413 06:46:07 -- host/discovery_remove_ifc.sh@90 -- # killprocess 71254 00:15:53.413 06:46:07 -- common/autotest_common.sh@936 -- # '[' -z 71254 ']' 00:15:53.413 06:46:07 -- common/autotest_common.sh@940 -- # kill -0 71254 00:15:53.413 06:46:07 -- common/autotest_common.sh@941 -- # uname 00:15:53.413 06:46:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:53.413 06:46:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71254 00:15:53.413 killing process with pid 71254 00:15:53.413 06:46:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:53.413 06:46:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:53.413 06:46:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71254' 00:15:53.413 06:46:07 -- common/autotest_common.sh@955 -- # kill 71254 00:15:53.413 06:46:07 -- common/autotest_common.sh@960 -- # wait 71254 00:15:53.672 06:46:07 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:15:53.672 06:46:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:53.672 06:46:07 -- nvmf/common.sh@116 -- # sync 00:15:53.672 06:46:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:53.672 06:46:07 -- nvmf/common.sh@119 -- # set +e 00:15:53.672 06:46:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:53.672 06:46:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:53.672 rmmod nvme_tcp 00:15:53.672 rmmod nvme_fabrics 00:15:53.672 rmmod nvme_keyring 00:15:53.672 06:46:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:53.672 06:46:07 -- nvmf/common.sh@123 -- # set -e 00:15:53.672 06:46:07 -- nvmf/common.sh@124 -- # return 0 00:15:53.672 06:46:07 -- nvmf/common.sh@477 -- # '[' -n 71222 ']' 00:15:53.672 06:46:07 -- nvmf/common.sh@478 -- # killprocess 71222 00:15:53.672 06:46:07 -- common/autotest_common.sh@936 -- # '[' -z 71222 ']' 00:15:53.672 06:46:07 -- common/autotest_common.sh@940 -- # kill -0 71222 00:15:53.672 06:46:07 -- common/autotest_common.sh@941 -- # uname 00:15:53.672 06:46:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:53.672 06:46:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71222 00:15:53.672 killing process with pid 71222 00:15:53.672 06:46:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:53.672 06:46:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
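Teardown happens in two layers, both visible in the surrounding trace: nvmftestfini unloads the kernel initiator modules (modprobe -v -r nvme-tcp, with the rmmod lines showing nvme_tcp, nvme_fabrics and nvme_keyring actually leaving), and killprocess stops each SPDK app after confirming the pid still names a reactor process rather than sudo, which is why the trace prints reactor_0 for the host app and reactor_1 for the target. A condensed sketch of that killprocess check, assuming it mirrors what the trace shows:

    pid=71222                                   # nvmfpid of the target app in this run
    kill -0 "$pid"                              # still alive?
    name=$(ps --no-headers -o comm= "$pid")     # an SPDK app shows up as reactor_<core>
    if [[ $name != sudo ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi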
00:15:53.672 06:46:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71222' 00:15:53.672 06:46:07 -- common/autotest_common.sh@955 -- # kill 71222 00:15:53.672 06:46:07 -- common/autotest_common.sh@960 -- # wait 71222 00:15:53.931 06:46:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:53.931 06:46:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:53.931 06:46:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:53.931 06:46:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.931 06:46:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:53.931 06:46:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.931 06:46:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.931 06:46:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.931 06:46:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:53.931 00:15:53.931 real 0m14.474s 00:15:53.931 user 0m22.874s 00:15:53.931 sys 0m2.317s 00:15:53.931 06:46:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:53.931 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:15:53.931 ************************************ 00:15:53.931 END TEST nvmf_discovery_remove_ifc 00:15:53.931 ************************************ 00:15:53.931 06:46:07 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:15:53.931 06:46:07 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:53.931 06:46:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:53.931 06:46:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:53.931 06:46:07 -- common/autotest_common.sh@10 -- # set +x 00:15:53.931 ************************************ 00:15:53.931 START TEST nvmf_digest 00:15:53.931 ************************************ 00:15:53.931 06:46:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:15:54.190 * Looking for test storage... 00:15:54.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:54.190 06:46:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:54.190 06:46:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:54.190 06:46:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:54.190 06:46:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:54.190 06:46:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:54.190 06:46:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:54.190 06:46:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:54.190 06:46:08 -- scripts/common.sh@335 -- # IFS=.-: 00:15:54.190 06:46:08 -- scripts/common.sh@335 -- # read -ra ver1 00:15:54.190 06:46:08 -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.190 06:46:08 -- scripts/common.sh@336 -- # read -ra ver2 00:15:54.190 06:46:08 -- scripts/common.sh@337 -- # local 'op=<' 00:15:54.190 06:46:08 -- scripts/common.sh@339 -- # ver1_l=2 00:15:54.190 06:46:08 -- scripts/common.sh@340 -- # ver2_l=1 00:15:54.190 06:46:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:54.190 06:46:08 -- scripts/common.sh@343 -- # case "$op" in 00:15:54.190 06:46:08 -- scripts/common.sh@344 -- # : 1 00:15:54.190 06:46:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:54.190 06:46:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.190 06:46:08 -- scripts/common.sh@364 -- # decimal 1 00:15:54.190 06:46:08 -- scripts/common.sh@352 -- # local d=1 00:15:54.190 06:46:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.190 06:46:08 -- scripts/common.sh@354 -- # echo 1 00:15:54.190 06:46:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:54.190 06:46:08 -- scripts/common.sh@365 -- # decimal 2 00:15:54.190 06:46:08 -- scripts/common.sh@352 -- # local d=2 00:15:54.190 06:46:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.190 06:46:08 -- scripts/common.sh@354 -- # echo 2 00:15:54.190 06:46:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:54.190 06:46:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:54.190 06:46:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:54.190 06:46:08 -- scripts/common.sh@367 -- # return 0 00:15:54.190 06:46:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.190 06:46:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.190 --rc genhtml_branch_coverage=1 00:15:54.190 --rc genhtml_function_coverage=1 00:15:54.190 --rc genhtml_legend=1 00:15:54.190 --rc geninfo_all_blocks=1 00:15:54.190 --rc geninfo_unexecuted_blocks=1 00:15:54.190 00:15:54.190 ' 00:15:54.190 06:46:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.190 --rc genhtml_branch_coverage=1 00:15:54.190 --rc genhtml_function_coverage=1 00:15:54.190 --rc genhtml_legend=1 00:15:54.190 --rc geninfo_all_blocks=1 00:15:54.190 --rc geninfo_unexecuted_blocks=1 00:15:54.190 00:15:54.190 ' 00:15:54.190 06:46:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.190 --rc genhtml_branch_coverage=1 00:15:54.190 --rc genhtml_function_coverage=1 00:15:54.190 --rc genhtml_legend=1 00:15:54.190 --rc geninfo_all_blocks=1 00:15:54.190 --rc geninfo_unexecuted_blocks=1 00:15:54.190 00:15:54.190 ' 00:15:54.190 06:46:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.190 --rc genhtml_branch_coverage=1 00:15:54.190 --rc genhtml_function_coverage=1 00:15:54.190 --rc genhtml_legend=1 00:15:54.190 --rc geninfo_all_blocks=1 00:15:54.190 --rc geninfo_unexecuted_blocks=1 00:15:54.190 00:15:54.190 ' 00:15:54.190 06:46:08 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.190 06:46:08 -- nvmf/common.sh@7 -- # uname -s 00:15:54.190 06:46:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.190 06:46:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.190 06:46:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.190 06:46:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.190 06:46:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.190 06:46:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.190 06:46:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.190 06:46:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.190 06:46:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.190 06:46:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.190 06:46:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:15:54.190 
06:46:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:15:54.190 06:46:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.190 06:46:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.190 06:46:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.190 06:46:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.190 06:46:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.190 06:46:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.190 06:46:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.191 06:46:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.191 06:46:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.191 06:46:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.191 06:46:08 -- paths/export.sh@5 -- # export PATH 00:15:54.191 06:46:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.191 06:46:08 -- nvmf/common.sh@46 -- # : 0 00:15:54.191 06:46:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:54.191 06:46:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:54.191 06:46:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:54.191 06:46:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.191 06:46:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.191 06:46:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
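The trace that follows rebuilds the same virtual topology the previous test tore down: nvmf_veth_init removes any stale links, creates the nvmf_tgt_ns_spdk namespace, wires a veth pair into it, and bridges the host-side ends so 10.0.0.1 (initiator) can reach 10.0.0.2 (target). Condensed to its essentials, with the commands as they appear in the trace (stale-link cleanup and the second target interface nvmf_tgt_if2/10.0.0.3 omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings that follow in the trace (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) simply verify this plumbing before the target is started.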
00:15:54.191 06:46:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:54.191 06:46:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:54.191 06:46:08 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:54.191 06:46:08 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:15:54.191 06:46:08 -- host/digest.sh@16 -- # runtime=2 00:15:54.191 06:46:08 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:15:54.191 06:46:08 -- host/digest.sh@132 -- # nvmftestinit 00:15:54.191 06:46:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:54.191 06:46:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.191 06:46:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:54.191 06:46:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:54.191 06:46:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:54.191 06:46:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.191 06:46:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.191 06:46:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.191 06:46:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:54.191 06:46:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:54.191 06:46:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:54.191 06:46:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:54.191 06:46:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:54.191 06:46:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:54.191 06:46:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.191 06:46:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.191 06:46:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:54.191 06:46:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:54.191 06:46:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:54.191 06:46:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:54.191 06:46:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:54.191 06:46:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.191 06:46:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:54.191 06:46:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:54.191 06:46:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:54.191 06:46:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:54.191 06:46:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:54.191 06:46:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:54.191 Cannot find device "nvmf_tgt_br" 00:15:54.191 06:46:08 -- nvmf/common.sh@154 -- # true 00:15:54.191 06:46:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.191 Cannot find device "nvmf_tgt_br2" 00:15:54.191 06:46:08 -- nvmf/common.sh@155 -- # true 00:15:54.191 06:46:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:54.191 06:46:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:54.191 Cannot find device "nvmf_tgt_br" 00:15:54.191 06:46:08 -- nvmf/common.sh@157 -- # true 00:15:54.191 06:46:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:54.191 Cannot find device "nvmf_tgt_br2" 00:15:54.191 06:46:08 -- nvmf/common.sh@158 -- # true 00:15:54.191 06:46:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:54.450 06:46:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:54.450 
06:46:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.450 06:46:08 -- nvmf/common.sh@161 -- # true 00:15:54.450 06:46:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.450 06:46:08 -- nvmf/common.sh@162 -- # true 00:15:54.450 06:46:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:54.450 06:46:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:54.450 06:46:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.450 06:46:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.450 06:46:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.450 06:46:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.450 06:46:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.450 06:46:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:54.450 06:46:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:54.450 06:46:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:54.450 06:46:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:54.450 06:46:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:54.450 06:46:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:54.450 06:46:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:54.450 06:46:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:54.450 06:46:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.450 06:46:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:54.450 06:46:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:54.450 06:46:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.450 06:46:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.450 06:46:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.450 06:46:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.450 06:46:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.450 06:46:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:54.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:15:54.450 00:15:54.450 --- 10.0.0.2 ping statistics --- 00:15:54.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.450 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:54.450 06:46:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:54.451 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:54.451 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:15:54.451 00:15:54.451 --- 10.0.0.3 ping statistics --- 00:15:54.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.451 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:54.451 06:46:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:54.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:54.451 00:15:54.451 --- 10.0.0.1 ping statistics --- 00:15:54.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.451 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:54.451 06:46:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.451 06:46:08 -- nvmf/common.sh@421 -- # return 0 00:15:54.451 06:46:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:54.451 06:46:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.451 06:46:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:54.451 06:46:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:54.451 06:46:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.451 06:46:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:54.451 06:46:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:54.451 06:46:08 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:54.451 06:46:08 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:15:54.451 06:46:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:54.451 06:46:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:54.451 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:15:54.710 ************************************ 00:15:54.710 START TEST nvmf_digest_clean 00:15:54.710 ************************************ 00:15:54.710 06:46:08 -- common/autotest_common.sh@1114 -- # run_digest 00:15:54.710 06:46:08 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:15:54.710 06:46:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:54.710 06:46:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:54.710 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:15:54.710 06:46:08 -- nvmf/common.sh@469 -- # nvmfpid=71663 00:15:54.710 06:46:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:54.710 06:46:08 -- nvmf/common.sh@470 -- # waitforlisten 71663 00:15:54.710 06:46:08 -- common/autotest_common.sh@829 -- # '[' -z 71663 ']' 00:15:54.710 06:46:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.710 06:46:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.710 06:46:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.710 06:46:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.710 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:15:54.710 [2024-12-14 06:46:08.502374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:54.710 [2024-12-14 06:46:08.502479] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.710 [2024-12-14 06:46:08.638514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.710 [2024-12-14 06:46:08.693038] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:54.710 [2024-12-14 06:46:08.693163] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.710 [2024-12-14 06:46:08.693174] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.710 [2024-12-14 06:46:08.693183] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.710 [2024-12-14 06:46:08.693212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.969 06:46:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.969 06:46:08 -- common/autotest_common.sh@862 -- # return 0 00:15:54.969 06:46:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:54.969 06:46:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:54.969 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:15:54.969 06:46:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.969 06:46:08 -- host/digest.sh@120 -- # common_target_config 00:15:54.969 06:46:08 -- host/digest.sh@43 -- # rpc_cmd 00:15:54.969 06:46:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.969 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:15:54.969 null0 00:15:54.969 [2024-12-14 06:46:08.852847] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.969 [2024-12-14 06:46:08.876994] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:54.969 06:46:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.969 06:46:08 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:15:54.969 06:46:08 -- host/digest.sh@77 -- # local rw bs qd 00:15:54.969 06:46:08 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:54.969 06:46:08 -- host/digest.sh@80 -- # rw=randread 00:15:54.969 06:46:08 -- host/digest.sh@80 -- # bs=4096 00:15:54.969 06:46:08 -- host/digest.sh@80 -- # qd=128 00:15:54.969 06:46:08 -- host/digest.sh@82 -- # bperfpid=71692 00:15:54.969 06:46:08 -- host/digest.sh@83 -- # waitforlisten 71692 /var/tmp/bperf.sock 00:15:54.969 06:46:08 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:15:54.969 06:46:08 -- common/autotest_common.sh@829 -- # '[' -z 71692 ']' 00:15:54.969 06:46:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:54.969 06:46:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.969 06:46:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
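run_bperf drives a second SPDK application, bdevperf, over its own RPC socket rather than over the target's: the workload parameters (randread, 4 KiB blocks, queue depth 128, 2-second runtime) go on the command line, the NVMe-oF attach with --ddgst (requesting TCP data digests) goes over /var/tmp/bperf.sock once the app is up, and perform_tests kicks off the timed run. The sequence that follows in the trace boils down to roughly this, assuming scripts/rpc.py in place of the bperf_rpc wrapper:

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &      # started in the background, polled via its socket
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests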
00:15:54.969 06:46:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.969 06:46:08 -- common/autotest_common.sh@10 -- # set +x 00:15:54.969 [2024-12-14 06:46:08.935510] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:54.969 [2024-12-14 06:46:08.936313] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71692 ] 00:15:55.228 [2024-12-14 06:46:09.076140] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.228 [2024-12-14 06:46:09.145699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.228 06:46:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.228 06:46:09 -- common/autotest_common.sh@862 -- # return 0 00:15:55.228 06:46:09 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:15:55.228 06:46:09 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:15:55.228 06:46:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:55.795 06:46:09 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:55.795 06:46:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:56.054 nvme0n1 00:15:56.054 06:46:09 -- host/digest.sh@91 -- # bperf_py perform_tests 00:15:56.054 06:46:09 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:56.054 Running I/O for 2 seconds... 
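The per-run flow that the xtrace above walks through is driven entirely over bdevperf's RPC socket: bdevperf is started with --wait-for-rpc, released with framework_start_init, handed an NVMe-oF TCP controller with data digest enabled, and then told to run the configured workload. A condensed sketch of those same calls (paths, address and NQN taken verbatim from this trace; the sock/rpc shell variables are only shorthand added here):

  sock=/var/tmp/bperf.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # release bdevperf from --wait-for-rpc so its subsystems finish initializing
  $rpc -s $sock framework_start_init

  # attach the target's subsystem over TCP with data digest enabled (--ddgst)
  $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # start the configured workload (here randread, 4096 B, queue depth 128, 2 seconds)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests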
00:15:57.974 00:15:57.974 Latency(us) 00:15:57.974 [2024-12-14T06:46:11.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.974 [2024-12-14T06:46:11.966Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:15:57.974 nvme0n1 : 2.00 16424.34 64.16 0.00 0.00 7788.10 6881.28 21090.68 00:15:57.974 [2024-12-14T06:46:11.966Z] =================================================================================================================== 00:15:57.974 [2024-12-14T06:46:11.966Z] Total : 16424.34 64.16 0.00 0.00 7788.10 6881.28 21090.68 00:15:57.974 0 00:15:57.974 06:46:11 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:15:57.974 06:46:11 -- host/digest.sh@92 -- # get_accel_stats 00:15:57.974 06:46:11 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:15:57.974 | select(.opcode=="crc32c") 00:15:57.974 | "\(.module_name) \(.executed)"' 00:15:57.974 06:46:11 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:15:57.974 06:46:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:15:58.542 06:46:12 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:15:58.542 06:46:12 -- host/digest.sh@93 -- # exp_module=software 00:15:58.542 06:46:12 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:15:58.542 06:46:12 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:15:58.542 06:46:12 -- host/digest.sh@97 -- # killprocess 71692 00:15:58.542 06:46:12 -- common/autotest_common.sh@936 -- # '[' -z 71692 ']' 00:15:58.542 06:46:12 -- common/autotest_common.sh@940 -- # kill -0 71692 00:15:58.542 06:46:12 -- common/autotest_common.sh@941 -- # uname 00:15:58.542 06:46:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:58.542 06:46:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71692 00:15:58.542 06:46:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:58.542 06:46:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:58.542 killing process with pid 71692 00:15:58.542 Received shutdown signal, test time was about 2.000000 seconds 00:15:58.542 00:15:58.542 Latency(us) 00:15:58.542 [2024-12-14T06:46:12.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.542 [2024-12-14T06:46:12.534Z] =================================================================================================================== 00:15:58.542 [2024-12-14T06:46:12.534Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:58.542 06:46:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71692' 00:15:58.542 06:46:12 -- common/autotest_common.sh@955 -- # kill 71692 00:15:58.542 06:46:12 -- common/autotest_common.sh@960 -- # wait 71692 00:15:58.542 06:46:12 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:15:58.542 06:46:12 -- host/digest.sh@77 -- # local rw bs qd 00:15:58.543 06:46:12 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:15:58.543 06:46:12 -- host/digest.sh@80 -- # rw=randread 00:15:58.543 06:46:12 -- host/digest.sh@80 -- # bs=131072 00:15:58.543 06:46:12 -- host/digest.sh@80 -- # qd=16 00:15:58.543 06:46:12 -- host/digest.sh@82 -- # bperfpid=71740 00:15:58.543 06:46:12 -- host/digest.sh@83 -- # waitforlisten 71740 /var/tmp/bperf.sock 00:15:58.543 06:46:12 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:15:58.543 06:46:12 -- 
common/autotest_common.sh@829 -- # '[' -z 71740 ']' 00:15:58.543 06:46:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:15:58.543 06:46:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.543 06:46:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:15:58.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:15:58.543 06:46:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.543 06:46:12 -- common/autotest_common.sh@10 -- # set +x 00:15:58.543 [2024-12-14 06:46:12.495458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:58.543 [2024-12-14 06:46:12.495748] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71740 ] 00:15:58.543 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:58.543 Zero copy mechanism will not be used. 00:15:58.802 [2024-12-14 06:46:12.634362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.802 [2024-12-14 06:46:12.687873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.802 06:46:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.802 06:46:12 -- common/autotest_common.sh@862 -- # return 0 00:15:58.802 06:46:12 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:15:58.802 06:46:12 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:15:58.802 06:46:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:15:59.061 06:46:12 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:59.061 06:46:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:15:59.320 nvme0n1 00:15:59.320 06:46:13 -- host/digest.sh@91 -- # bperf_py perform_tests 00:15:59.320 06:46:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:15:59.579 I/O size of 131072 is greater than zero copy threshold (65536). 00:15:59.579 Zero copy mechanism will not be used. 00:15:59.579 Running I/O for 2 seconds... 
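Each 2-second run above is judged afterwards from the accel framework statistics rather than from the I/O numbers: accel_get_stats queries bdevperf over the same socket, the jq filter picks out the crc32c entry, and the test only requires that the expected module (software in this trace) executed a non-zero number of crc32c operations. A hedged equivalent of that check, repackaged into one snippet (the RPC call, jq filter and the two tests appear in the trace; the process substitution is only packaging added here):

  read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  (( acc_executed > 0 ))            # crc32c must actually have been executed during the run
  [[ $acc_module == software ]]     # and it must have been handled by the expected module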
00:16:01.484 00:16:01.484 Latency(us) 00:16:01.484 [2024-12-14T06:46:15.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.484 [2024-12-14T06:46:15.476Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:01.484 nvme0n1 : 2.00 7930.56 991.32 0.00 0.00 2014.61 1757.56 7179.17 00:16:01.484 [2024-12-14T06:46:15.476Z] =================================================================================================================== 00:16:01.484 [2024-12-14T06:46:15.476Z] Total : 7930.56 991.32 0.00 0.00 2014.61 1757.56 7179.17 00:16:01.484 0 00:16:01.484 06:46:15 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:01.484 06:46:15 -- host/digest.sh@92 -- # get_accel_stats 00:16:01.484 06:46:15 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:01.484 06:46:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:01.484 06:46:15 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:01.484 | select(.opcode=="crc32c") 00:16:01.484 | "\(.module_name) \(.executed)"' 00:16:01.743 06:46:15 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:01.743 06:46:15 -- host/digest.sh@93 -- # exp_module=software 00:16:01.743 06:46:15 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:01.743 06:46:15 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:01.743 06:46:15 -- host/digest.sh@97 -- # killprocess 71740 00:16:01.743 06:46:15 -- common/autotest_common.sh@936 -- # '[' -z 71740 ']' 00:16:01.743 06:46:15 -- common/autotest_common.sh@940 -- # kill -0 71740 00:16:01.743 06:46:15 -- common/autotest_common.sh@941 -- # uname 00:16:01.743 06:46:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:01.743 06:46:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71740 00:16:01.743 killing process with pid 71740 00:16:01.743 Received shutdown signal, test time was about 2.000000 seconds 00:16:01.743 00:16:01.743 Latency(us) 00:16:01.743 [2024-12-14T06:46:15.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.743 [2024-12-14T06:46:15.735Z] =================================================================================================================== 00:16:01.743 [2024-12-14T06:46:15.735Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.743 06:46:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:01.743 06:46:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:01.743 06:46:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71740' 00:16:01.743 06:46:15 -- common/autotest_common.sh@955 -- # kill 71740 00:16:01.743 06:46:15 -- common/autotest_common.sh@960 -- # wait 71740 00:16:02.002 06:46:15 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:16:02.002 06:46:15 -- host/digest.sh@77 -- # local rw bs qd 00:16:02.002 06:46:15 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:02.002 06:46:15 -- host/digest.sh@80 -- # rw=randwrite 00:16:02.002 06:46:15 -- host/digest.sh@80 -- # bs=4096 00:16:02.002 06:46:15 -- host/digest.sh@80 -- # qd=128 00:16:02.002 06:46:15 -- host/digest.sh@82 -- # bperfpid=71787 00:16:02.002 06:46:15 -- host/digest.sh@83 -- # waitforlisten 71787 /var/tmp/bperf.sock 00:16:02.002 06:46:15 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:02.002 06:46:15 -- 
common/autotest_common.sh@829 -- # '[' -z 71787 ']' 00:16:02.002 06:46:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:02.002 06:46:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:02.002 06:46:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:02.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:02.002 06:46:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:02.002 06:46:15 -- common/autotest_common.sh@10 -- # set +x 00:16:02.002 [2024-12-14 06:46:15.918150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:02.002 [2024-12-14 06:46:15.918442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71787 ] 00:16:02.261 [2024-12-14 06:46:16.055819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.261 [2024-12-14 06:46:16.109253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.261 06:46:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:02.261 06:46:16 -- common/autotest_common.sh@862 -- # return 0 00:16:02.261 06:46:16 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:02.261 06:46:16 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:02.261 06:46:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:02.520 06:46:16 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:02.520 06:46:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:03.088 nvme0n1 00:16:03.088 06:46:16 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:03.088 06:46:16 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:03.088 Running I/O for 2 seconds... 
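A quick consistency check on the bdevperf summaries above: the MiB/s column is simply IOPS scaled by the I/O size, so the two completed tables (16424.34 IOPS at 4096 B and 7930.56 IOPS at 131072 B) reproduce the reported 64.16 and 991.32 MiB/s. The awk one-liners are added here only to make that arithmetic explicit:

  # MiB/s = IOPS * io_size_bytes / 2^20
  awk 'BEGIN { printf "%.2f\n", 16424.34 * 4096   / 1048576 }'   # -> 64.16
  awk 'BEGIN { printf "%.2f\n", 7930.56  * 131072 / 1048576 }'   # -> 991.32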
00:16:04.992 00:16:04.992 Latency(us) 00:16:04.992 [2024-12-14T06:46:18.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.992 [2024-12-14T06:46:18.984Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:04.992 nvme0n1 : 2.00 17491.55 68.33 0.00 0.00 7311.66 5779.08 15609.48 00:16:04.992 [2024-12-14T06:46:18.984Z] =================================================================================================================== 00:16:04.992 [2024-12-14T06:46:18.984Z] Total : 17491.55 68.33 0.00 0.00 7311.66 5779.08 15609.48 00:16:04.992 0 00:16:04.992 06:46:18 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:04.992 06:46:18 -- host/digest.sh@92 -- # get_accel_stats 00:16:04.992 06:46:18 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:04.992 06:46:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:04.992 06:46:18 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:04.992 | select(.opcode=="crc32c") 00:16:04.992 | "\(.module_name) \(.executed)"' 00:16:05.251 06:46:19 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:05.251 06:46:19 -- host/digest.sh@93 -- # exp_module=software 00:16:05.251 06:46:19 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:05.251 06:46:19 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:05.251 06:46:19 -- host/digest.sh@97 -- # killprocess 71787 00:16:05.251 06:46:19 -- common/autotest_common.sh@936 -- # '[' -z 71787 ']' 00:16:05.251 06:46:19 -- common/autotest_common.sh@940 -- # kill -0 71787 00:16:05.251 06:46:19 -- common/autotest_common.sh@941 -- # uname 00:16:05.251 06:46:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:05.251 06:46:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71787 00:16:05.510 06:46:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:05.510 06:46:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:05.510 06:46:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71787' 00:16:05.510 killing process with pid 71787 00:16:05.510 Received shutdown signal, test time was about 2.000000 seconds 00:16:05.510 00:16:05.510 Latency(us) 00:16:05.510 [2024-12-14T06:46:19.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.510 [2024-12-14T06:46:19.502Z] =================================================================================================================== 00:16:05.510 [2024-12-14T06:46:19.502Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:05.510 06:46:19 -- common/autotest_common.sh@955 -- # kill 71787 00:16:05.510 06:46:19 -- common/autotest_common.sh@960 -- # wait 71787 00:16:05.510 06:46:19 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:16:05.510 06:46:19 -- host/digest.sh@77 -- # local rw bs qd 00:16:05.510 06:46:19 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:05.510 06:46:19 -- host/digest.sh@80 -- # rw=randwrite 00:16:05.510 06:46:19 -- host/digest.sh@80 -- # bs=131072 00:16:05.510 06:46:19 -- host/digest.sh@80 -- # qd=16 00:16:05.510 06:46:19 -- host/digest.sh@82 -- # bperfpid=71841 00:16:05.510 06:46:19 -- host/digest.sh@83 -- # waitforlisten 71841 /var/tmp/bperf.sock 00:16:05.510 06:46:19 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:05.510 06:46:19 -- 
common/autotest_common.sh@829 -- # '[' -z 71841 ']' 00:16:05.510 06:46:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:05.510 06:46:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.510 06:46:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:05.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:05.510 06:46:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.510 06:46:19 -- common/autotest_common.sh@10 -- # set +x 00:16:05.510 [2024-12-14 06:46:19.483283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:05.510 [2024-12-14 06:46:19.483585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71841 ] 00:16:05.511 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:05.511 Zero copy mechanism will not be used. 00:16:05.770 [2024-12-14 06:46:19.623474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.770 [2024-12-14 06:46:19.679482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.770 06:46:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.770 06:46:19 -- common/autotest_common.sh@862 -- # return 0 00:16:05.770 06:46:19 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:05.770 06:46:19 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:05.770 06:46:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:06.338 06:46:20 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:06.338 06:46:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:06.338 nvme0n1 00:16:06.338 06:46:20 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:06.338 06:46:20 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:06.597 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:06.597 Zero copy mechanism will not be used. 00:16:06.597 Running I/O for 2 seconds... 
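Every bdevperf instance above is gated on waitforlisten before the first RPC is issued; the xtrace only shows its entry checks (a pid argument, rpc_addr=/var/tmp/bperf.sock, max_retries=100) and its successful return, so the snippet below is only a rough stand-in for what the helper evidently does (poll the RPC socket until the new process answers, or give up when the retry budget runs out), not the actual autotest_common.sh implementation:

  # rough stand-in, NOT the real waitforlisten: poll a freshly started SPDK app's
  # RPC socket until it responds, or give up after max_retries attempts
  wait_for_rpc_socket() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    local i
    for ((i = 0; i < max_retries; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1        # app died while we were waiting
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
           &>/dev/null; then
        return 0                                     # socket is up and answering RPCs
      fi
      sleep 0.5
    done
    return 1
  }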
00:16:08.530 00:16:08.530 Latency(us) 00:16:08.530 [2024-12-14T06:46:22.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.530 [2024-12-14T06:46:22.522Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:08.530 nvme0n1 : 2.00 6630.23 828.78 0.00 0.00 2408.07 1906.50 10724.07 00:16:08.530 [2024-12-14T06:46:22.522Z] =================================================================================================================== 00:16:08.530 [2024-12-14T06:46:22.522Z] Total : 6630.23 828.78 0.00 0.00 2408.07 1906.50 10724.07 00:16:08.530 0 00:16:08.530 06:46:22 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:08.530 06:46:22 -- host/digest.sh@92 -- # get_accel_stats 00:16:08.530 06:46:22 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:08.530 06:46:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:08.530 06:46:22 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:08.530 | select(.opcode=="crc32c") 00:16:08.530 | "\(.module_name) \(.executed)"' 00:16:08.789 06:46:22 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:08.789 06:46:22 -- host/digest.sh@93 -- # exp_module=software 00:16:08.789 06:46:22 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:08.789 06:46:22 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:08.789 06:46:22 -- host/digest.sh@97 -- # killprocess 71841 00:16:08.789 06:46:22 -- common/autotest_common.sh@936 -- # '[' -z 71841 ']' 00:16:08.789 06:46:22 -- common/autotest_common.sh@940 -- # kill -0 71841 00:16:08.789 06:46:22 -- common/autotest_common.sh@941 -- # uname 00:16:08.789 06:46:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:08.789 06:46:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71841 00:16:08.789 killing process with pid 71841 00:16:08.789 Received shutdown signal, test time was about 2.000000 seconds 00:16:08.789 00:16:08.789 Latency(us) 00:16:08.789 [2024-12-14T06:46:22.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.789 [2024-12-14T06:46:22.781Z] =================================================================================================================== 00:16:08.789 [2024-12-14T06:46:22.781Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:08.789 06:46:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:08.789 06:46:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:08.789 06:46:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71841' 00:16:08.789 06:46:22 -- common/autotest_common.sh@955 -- # kill 71841 00:16:08.789 06:46:22 -- common/autotest_common.sh@960 -- # wait 71841 00:16:09.048 06:46:22 -- host/digest.sh@126 -- # killprocess 71663 00:16:09.048 06:46:22 -- common/autotest_common.sh@936 -- # '[' -z 71663 ']' 00:16:09.048 06:46:22 -- common/autotest_common.sh@940 -- # kill -0 71663 00:16:09.048 06:46:22 -- common/autotest_common.sh@941 -- # uname 00:16:09.048 06:46:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:09.048 06:46:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71663 00:16:09.048 killing process with pid 71663 00:16:09.048 06:46:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:09.048 06:46:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:09.048 06:46:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71663' 
00:16:09.048 06:46:22 -- common/autotest_common.sh@955 -- # kill 71663 00:16:09.048 06:46:22 -- common/autotest_common.sh@960 -- # wait 71663 00:16:09.307 ************************************ 00:16:09.307 END TEST nvmf_digest_clean 00:16:09.307 ************************************ 00:16:09.307 00:16:09.307 real 0m14.710s 00:16:09.307 user 0m28.264s 00:16:09.307 sys 0m4.369s 00:16:09.307 06:46:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:09.307 06:46:23 -- common/autotest_common.sh@10 -- # set +x 00:16:09.307 06:46:23 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:16:09.307 06:46:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:09.307 06:46:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:09.307 06:46:23 -- common/autotest_common.sh@10 -- # set +x 00:16:09.307 ************************************ 00:16:09.307 START TEST nvmf_digest_error 00:16:09.307 ************************************ 00:16:09.307 06:46:23 -- common/autotest_common.sh@1114 -- # run_digest_error 00:16:09.307 06:46:23 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:16:09.307 06:46:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:09.307 06:46:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:09.307 06:46:23 -- common/autotest_common.sh@10 -- # set +x 00:16:09.307 06:46:23 -- nvmf/common.sh@469 -- # nvmfpid=71917 00:16:09.307 06:46:23 -- nvmf/common.sh@470 -- # waitforlisten 71917 00:16:09.307 06:46:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:09.307 06:46:23 -- common/autotest_common.sh@829 -- # '[' -z 71917 ']' 00:16:09.308 06:46:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.308 06:46:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.308 06:46:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.308 06:46:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.308 06:46:23 -- common/autotest_common.sh@10 -- # set +x 00:16:09.308 [2024-12-14 06:46:23.265623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:09.308 [2024-12-14 06:46:23.265919] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.567 [2024-12-14 06:46:23.411866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.567 [2024-12-14 06:46:23.463886] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:09.567 [2024-12-14 06:46:23.464107] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.567 [2024-12-14 06:46:23.464122] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.567 [2024-12-14 06:46:23.464131] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:09.567 [2024-12-14 06:46:23.464162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.567 06:46:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.567 06:46:23 -- common/autotest_common.sh@862 -- # return 0 00:16:09.567 06:46:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:09.567 06:46:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.567 06:46:23 -- common/autotest_common.sh@10 -- # set +x 00:16:09.567 06:46:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.567 06:46:23 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:09.567 06:46:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.567 06:46:23 -- common/autotest_common.sh@10 -- # set +x 00:16:09.567 [2024-12-14 06:46:23.536613] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:09.567 06:46:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.567 06:46:23 -- host/digest.sh@104 -- # common_target_config 00:16:09.567 06:46:23 -- host/digest.sh@43 -- # rpc_cmd 00:16:09.567 06:46:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.567 06:46:23 -- common/autotest_common.sh@10 -- # set +x 00:16:09.826 null0 00:16:09.826 [2024-12-14 06:46:23.607731] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.826 [2024-12-14 06:46:23.631837] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:09.826 06:46:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.826 06:46:23 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:16:09.826 06:46:23 -- host/digest.sh@54 -- # local rw bs qd 00:16:09.826 06:46:23 -- host/digest.sh@56 -- # rw=randread 00:16:09.826 06:46:23 -- host/digest.sh@56 -- # bs=4096 00:16:09.826 06:46:23 -- host/digest.sh@56 -- # qd=128 00:16:09.826 06:46:23 -- host/digest.sh@58 -- # bperfpid=71936 00:16:09.826 06:46:23 -- host/digest.sh@60 -- # waitforlisten 71936 /var/tmp/bperf.sock 00:16:09.826 06:46:23 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:09.826 06:46:23 -- common/autotest_common.sh@829 -- # '[' -z 71936 ']' 00:16:09.826 06:46:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:09.826 06:46:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.826 06:46:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:09.826 06:46:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.826 06:46:23 -- common/autotest_common.sh@10 -- # set +x 00:16:09.826 [2024-12-14 06:46:23.694085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:09.826 [2024-12-14 06:46:23.694335] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71936 ] 00:16:10.085 [2024-12-14 06:46:23.833063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.085 [2024-12-14 06:46:23.889141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.019 06:46:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:11.019 06:46:24 -- common/autotest_common.sh@862 -- # return 0 00:16:11.019 06:46:24 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:11.019 06:46:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:11.019 06:46:24 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:11.019 06:46:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.019 06:46:24 -- common/autotest_common.sh@10 -- # set +x 00:16:11.019 06:46:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.019 06:46:24 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:11.019 06:46:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:11.280 nvme0n1 00:16:11.280 06:46:25 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:16:11.280 06:46:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.280 06:46:25 -- common/autotest_common.sh@10 -- # set +x 00:16:11.280 06:46:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.280 06:46:25 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:11.280 06:46:25 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:11.540 Running I/O for 2 seconds... 
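The error-path setup above differs from the clean runs only in how crc32c is serviced: accel_assign_opc routes crc32c to the error-injection accel module (the accel_rpc.c NOTICE earlier in this test confirms the assignment), bdevperf is told to retry indefinitely and keep per-error statistics, the controller is attached with digests enabled, and the error module is then armed to corrupt every 256th crc32c operation, which is what produces the stream of data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that follows. A condensed sketch in trace order; the tgt_rpc/bperf_rpc helpers are shorthand added here, and routing the rpc_cmd calls to the target's default /var/tmp/spdk.sock is inferred from the trace rather than shown in it:

  tgt_rpc()   { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }                       # nvmf_tgt, default /var/tmp/spdk.sock
  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; } # bdevperf instance

  tgt_rpc   accel_assign_opc -o crc32c -m error                            # crc32c handled by the error module
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # retry forever, count NVMe errors
  tgt_rpc   accel_error_inject_error -o crc32c -t disable                  # injection off while attaching
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  tgt_rpc   accel_error_inject_error -o crc32c -t corrupt -i 256           # corrupt every 256th crc32c
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests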
00:16:11.540 [2024-12-14 06:46:25.412766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.540 [2024-12-14 06:46:25.412817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.540 [2024-12-14 06:46:25.412847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.540 [2024-12-14 06:46:25.428372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.540 [2024-12-14 06:46:25.428411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.540 [2024-12-14 06:46:25.428441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.540 [2024-12-14 06:46:25.444452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.540 [2024-12-14 06:46:25.444489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.540 [2024-12-14 06:46:25.444517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.540 [2024-12-14 06:46:25.460187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.540 [2024-12-14 06:46:25.460222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.540 [2024-12-14 06:46:25.460251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.540 [2024-12-14 06:46:25.475579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.540 [2024-12-14 06:46:25.475780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.540 [2024-12-14 06:46:25.475813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.540 [2024-12-14 06:46:25.491099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.540 [2024-12-14 06:46:25.491138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.540 [2024-12-14 06:46:25.491169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.540 [2024-12-14 06:46:25.506282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.540 [2024-12-14 06:46:25.506318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.540 [2024-12-14 06:46:25.506347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.540 [2024-12-14 06:46:25.521725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.540 [2024-12-14 06:46:25.521941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.540 [2024-12-14 06:46:25.521976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.799 [2024-12-14 06:46:25.538046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.799 [2024-12-14 06:46:25.538084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.799 [2024-12-14 06:46:25.538113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.799 [2024-12-14 06:46:25.553398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.799 [2024-12-14 06:46:25.553587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.799 [2024-12-14 06:46:25.553621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.799 [2024-12-14 06:46:25.569273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.799 [2024-12-14 06:46:25.569326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.799 [2024-12-14 06:46:25.569355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.799 [2024-12-14 06:46:25.584734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.799 [2024-12-14 06:46:25.584770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.799 [2024-12-14 06:46:25.584799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.799 [2024-12-14 06:46:25.600211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.799 [2024-12-14 06:46:25.600247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.799 [2024-12-14 06:46:25.600275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.799 [2024-12-14 06:46:25.616484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.799 [2024-12-14 06:46:25.616520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.799 [2024-12-14 06:46:25.616548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.799 [2024-12-14 06:46:25.632481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.799 [2024-12-14 06:46:25.632518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.799 [2024-12-14 06:46:25.632547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.799 [2024-12-14 06:46:25.649391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.799 [2024-12-14 06:46:25.649428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.799 [2024-12-14 06:46:25.649457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.799 [2024-12-14 06:46:25.664868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.799 [2024-12-14 06:46:25.664927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.799 [2024-12-14 06:46:25.664956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.799 [2024-12-14 06:46:25.680206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.799 [2024-12-14 06:46:25.680241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.799 [2024-12-14 06:46:25.680269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.799 [2024-12-14 06:46:25.695561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.799 [2024-12-14 06:46:25.695758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.799 [2024-12-14 06:46:25.695791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.799 [2024-12-14 06:46:25.711205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.800 [2024-12-14 06:46:25.711273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.800 [2024-12-14 06:46:25.711302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.800 [2024-12-14 06:46:25.728345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.800 [2024-12-14 06:46:25.728401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.800 [2024-12-14 06:46:25.728431] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.800 [2024-12-14 06:46:25.745260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.800 [2024-12-14 06:46:25.745299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:89 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.800 [2024-12-14 06:46:25.745344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.800 [2024-12-14 06:46:25.761385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.800 [2024-12-14 06:46:25.761421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.800 [2024-12-14 06:46:25.761449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:11.800 [2024-12-14 06:46:25.776836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:11.800 [2024-12-14 06:46:25.776872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:11.800 [2024-12-14 06:46:25.776945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:25.794181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:25.794217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:25.794245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:25.810863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:25.810950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:25.810983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:25.826366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:25.826401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:25.826430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:25.843021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:25.843062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:12.059 [2024-12-14 06:46:25.843076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:25.859809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:25.859862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:25.859907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:25.875924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:25.875985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:25.876015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:25.891455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:25.891653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:25.891687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:25.907069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:25.907108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:25.907137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:25.922451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:25.922486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:25.922514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:25.937882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:25.938117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:25.938150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:25.953655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:25.953869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:6959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:25.954047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:25.969747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:25.969993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:25.970196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:25.987504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:25.987723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:25.987876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:26.004936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:26.005153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:26.005323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:26.021802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:26.022048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:26.022237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.059 [2024-12-14 06:46:26.039600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.059 [2024-12-14 06:46:26.039798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.059 [2024-12-14 06:46:26.040121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.318 [2024-12-14 06:46:26.058594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.058807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.059033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.077245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.077470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.077674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.095780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.096032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.096186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.113914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.113999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.114031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.131533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.131697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.131730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.148848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.148928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.148959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.165199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.165236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.165264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.181312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.181348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.181377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.197449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 
00:16:12.319 [2024-12-14 06:46:26.197485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.197513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.213465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.213500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.213529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.229490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.229526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.229554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.247188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.247407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.247439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.264469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.264664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.264803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.281213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.281394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.281530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.319 [2024-12-14 06:46:26.297556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.319 [2024-12-14 06:46:26.297811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.319 [2024-12-14 06:46:26.297993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.577 [2024-12-14 06:46:26.313931] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.314152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.314292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.330165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.330386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.330542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.347395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.347591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.347743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.363949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.364171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.364367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.380210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.380426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.380582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.396311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.396525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.396720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.412295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.412332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.412362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.427634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.427825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.427864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.450266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.450457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.450497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.465884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.465928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.465957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.481213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.481250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.481278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.496391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.496427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.496455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.511680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.511915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.511933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.526983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.527207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.527422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.543647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.543840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.544038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.578 [2024-12-14 06:46:26.560157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.578 [2024-12-14 06:46:26.560341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.578 [2024-12-14 06:46:26.560522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.836 [2024-12-14 06:46:26.576802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.577012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.577227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.592833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.593041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.593230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.608588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.608781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.608943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.624642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.624836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.625046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.640703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.640941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 
06:46:26.641044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.656752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.656790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.656819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.672164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.672201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.672230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.687500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.687674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.687707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.703113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.703151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.703181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.718553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.718589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.718617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.735084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.735124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.735139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.752714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.752936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3054 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.752956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.769694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.769732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.769761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.785484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.785521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.785550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.800805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.800841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.800869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:12.837 [2024-12-14 06:46:26.816427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:12.837 [2024-12-14 06:46:26.816462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:12.837 [2024-12-14 06:46:26.816491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.096 [2024-12-14 06:46:26.832269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.096 [2024-12-14 06:46:26.832305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.096 [2024-12-14 06:46:26.832349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.096 [2024-12-14 06:46:26.847949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.096 [2024-12-14 06:46:26.848164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.096 [2024-12-14 06:46:26.848197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.096 [2024-12-14 06:46:26.863776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.096 [2024-12-14 06:46:26.863979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:14183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.096 [2024-12-14 06:46:26.864012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.096 [2024-12-14 06:46:26.879587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.096 [2024-12-14 06:46:26.879625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.096 [2024-12-14 06:46:26.879653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.096 [2024-12-14 06:46:26.894996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.096 [2024-12-14 06:46:26.895038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.097 [2024-12-14 06:46:26.895053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.097 [2024-12-14 06:46:26.910765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.097 [2024-12-14 06:46:26.910997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.097 [2024-12-14 06:46:26.911016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.097 [2024-12-14 06:46:26.927371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.097 [2024-12-14 06:46:26.927585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.097 [2024-12-14 06:46:26.927730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.097 [2024-12-14 06:46:26.944629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.097 [2024-12-14 06:46:26.944842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.097 [2024-12-14 06:46:26.945076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.097 [2024-12-14 06:46:26.961083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.097 [2024-12-14 06:46:26.961300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.097 [2024-12-14 06:46:26.961445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.097 [2024-12-14 06:46:26.977119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.097 [2024-12-14 06:46:26.977333] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.097 [2024-12-14 06:46:26.977544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.097 [2024-12-14 06:46:26.993228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.097 [2024-12-14 06:46:26.993454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.097 [2024-12-14 06:46:26.993606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.097 [2024-12-14 06:46:27.008941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.097 [2024-12-14 06:46:27.009171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.097 [2024-12-14 06:46:27.009371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.097 [2024-12-14 06:46:27.025109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.097 [2024-12-14 06:46:27.025322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.097 [2024-12-14 06:46:27.025478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.097 [2024-12-14 06:46:27.041152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.097 [2024-12-14 06:46:27.041380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.097 [2024-12-14 06:46:27.041525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.097 [2024-12-14 06:46:27.057426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.097 [2024-12-14 06:46:27.057627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.097 [2024-12-14 06:46:27.057768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.097 [2024-12-14 06:46:27.073327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.097 [2024-12-14 06:46:27.073531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.097 [2024-12-14 06:46:27.073564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.089206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.089245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.089275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.104607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.104642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.104670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.120735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.120771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.120805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.138001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.138049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.138079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.156004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.156046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.156060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.174275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.174315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.174330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.192214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.192272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.192302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.209685] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.209723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.209753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.227489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.227657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.227690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.244574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.244630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.244660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.262845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.262931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.262947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.280713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.280750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.280780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.297386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.297423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.297452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.356 [2024-12-14 06:46:27.313884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.313963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.313994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:16:13.356 [2024-12-14 06:46:27.330409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.356 [2024-12-14 06:46:27.330444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.356 [2024-12-14 06:46:27.330473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.615 [2024-12-14 06:46:27.346816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.615 [2024-12-14 06:46:27.346854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.615 [2024-12-14 06:46:27.346883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.615 [2024-12-14 06:46:27.363265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.615 [2024-12-14 06:46:27.363496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.615 [2024-12-14 06:46:27.363528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.615 [2024-12-14 06:46:27.379203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.615 [2024-12-14 06:46:27.379460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.615 [2024-12-14 06:46:27.379601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.615 [2024-12-14 06:46:27.395027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1732d40) 00:16:13.615 [2024-12-14 06:46:27.395210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:13.615 [2024-12-14 06:46:27.395412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:13.615 00:16:13.615 Latency(us) 00:16:13.615 [2024-12-14T06:46:27.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.615 [2024-12-14T06:46:27.607Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:13.615 nvme0n1 : 2.01 15511.70 60.59 0.00 0.00 8244.46 7238.75 30027.40 00:16:13.615 [2024-12-14T06:46:27.607Z] =================================================================================================================== 00:16:13.615 [2024-12-14T06:46:27.607Z] Total : 15511.70 60.59 0.00 0.00 8244.46 7238.75 30027.40 00:16:13.615 0 00:16:13.615 06:46:27 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:13.615 06:46:27 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:13.615 06:46:27 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:13.615 | .driver_specific 00:16:13.615 | .nvme_error 00:16:13.615 | .status_code 00:16:13.615 | 
.command_transient_transport_error' 00:16:13.615 06:46:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:13.874 06:46:27 -- host/digest.sh@71 -- # (( 122 > 0 )) 00:16:13.874 06:46:27 -- host/digest.sh@73 -- # killprocess 71936 00:16:13.874 06:46:27 -- common/autotest_common.sh@936 -- # '[' -z 71936 ']' 00:16:13.874 06:46:27 -- common/autotest_common.sh@940 -- # kill -0 71936 00:16:13.874 06:46:27 -- common/autotest_common.sh@941 -- # uname 00:16:13.874 06:46:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.874 06:46:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71936 00:16:13.874 killing process with pid 71936 00:16:13.874 Received shutdown signal, test time was about 2.000000 seconds 00:16:13.874 00:16:13.874 Latency(us) 00:16:13.874 [2024-12-14T06:46:27.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.874 [2024-12-14T06:46:27.866Z] =================================================================================================================== 00:16:13.874 [2024-12-14T06:46:27.866Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:13.874 06:46:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:13.874 06:46:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:13.874 06:46:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71936' 00:16:13.874 06:46:27 -- common/autotest_common.sh@955 -- # kill 71936 00:16:13.874 06:46:27 -- common/autotest_common.sh@960 -- # wait 71936 00:16:14.133 06:46:27 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:16:14.133 06:46:27 -- host/digest.sh@54 -- # local rw bs qd 00:16:14.133 06:46:27 -- host/digest.sh@56 -- # rw=randread 00:16:14.133 06:46:27 -- host/digest.sh@56 -- # bs=131072 00:16:14.133 06:46:27 -- host/digest.sh@56 -- # qd=16 00:16:14.133 06:46:27 -- host/digest.sh@58 -- # bperfpid=71996 00:16:14.133 06:46:27 -- host/digest.sh@60 -- # waitforlisten 71996 /var/tmp/bperf.sock 00:16:14.133 06:46:27 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:16:14.133 06:46:27 -- common/autotest_common.sh@829 -- # '[' -z 71996 ']' 00:16:14.133 06:46:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:14.133 06:46:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.133 06:46:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:14.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:14.133 06:46:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.133 06:46:27 -- common/autotest_common.sh@10 -- # set +x 00:16:14.133 [2024-12-14 06:46:27.994133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:14.133 [2024-12-14 06:46:27.994285] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71996 ] 00:16:14.133 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:14.133 Zero copy mechanism will not be used. 
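The get_transient_errcount trace above is the pass check for the qd-128 randread case whose summary appears just before it: bdev_get_iostat (with NVMe error statistics enabled) is piped through the traced jq filter, and the resulting command_transient_transport_error count, 122 in this run, must be greater than zero. A minimal sketch of that query, reusing only the rpc.py invocation and jq filter visible in the trace; the err_count variable name is illustrative, and the socket path and bdev name are the ones traced:

  # Sketch: read the transient transport error counter for nvme0n1 over the bperf RPC socket.
  err_count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
                bdev_get_iostat -b nvme0n1 \
              | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
                       | .command_transient_transport_error')
  # Each injected data digest error that completed as COMMAND TRANSIENT TRANSPORT ERROR
  # increments this counter; the test requires at least one.
  (( err_count > 0 )) && echo "transient transport errors: $err_count"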
00:16:14.391 [2024-12-14 06:46:28.142524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.391 [2024-12-14 06:46:28.197426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.327 06:46:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.327 06:46:28 -- common/autotest_common.sh@862 -- # return 0 00:16:15.327 06:46:28 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:15.327 06:46:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:15.327 06:46:29 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:15.327 06:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.327 06:46:29 -- common/autotest_common.sh@10 -- # set +x 00:16:15.327 06:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.327 06:46:29 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:15.327 06:46:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:15.586 nvme0n1 00:16:15.586 06:46:29 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:15.586 06:46:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.586 06:46:29 -- common/autotest_common.sh@10 -- # set +x 00:16:15.586 06:46:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.586 06:46:29 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:15.586 06:46:29 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:15.846 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:15.846 Zero copy mechanism will not be used. 00:16:15.846 Running I/O for 2 seconds... 
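The setup traced above for run_bperf_err randread 131072 16 follows the same pattern as the case that just completed: start bdevperf against /var/tmp/bperf.sock, enable per-bdev NVMe error statistics with unlimited retries, clear any previous crc32c error injection, attach the TCP controller with data digests (--ddgst) enabled, arm the accel framework to corrupt 32 crc32c operations, and run I/O for two seconds. A condensed sketch of those steps, assuming the target from earlier in the log is still listening on 10.0.0.2:4420; the commands, paths, and arguments are the ones shown in the trace, with only the SPDK shell variable and the explicit backgrounding added for brevity:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Initiator: bdevperf on its own RPC socket, 128 KiB random reads, qd 16, 2 s, wait for perform_tests (-z).
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  # Keep NVMe error counters and retry failed I/O indefinitely so injected errors are only counted.
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # rpc.py without -s talks to the default application socket, as rpc_cmd does in the trace.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # Attach over TCP with data digest enabled so corrupted CRCs are caught on receive.
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt 32 crc32c operations, then drive the two-second workload.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests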
00:16:15.846 [2024-12-14 06:46:29.600306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.846 [2024-12-14 06:46:29.600377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.846 [2024-12-14 06:46:29.600408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.846 [2024-12-14 06:46:29.604745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.846 [2024-12-14 06:46:29.604785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.846 [2024-12-14 06:46:29.604814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.846 [2024-12-14 06:46:29.609193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.846 [2024-12-14 06:46:29.609249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.846 [2024-12-14 06:46:29.609277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.846 [2024-12-14 06:46:29.613409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.846 [2024-12-14 06:46:29.613447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.846 [2024-12-14 06:46:29.613476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.846 [2024-12-14 06:46:29.617591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.846 [2024-12-14 06:46:29.617633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.846 [2024-12-14 06:46:29.617647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.846 [2024-12-14 06:46:29.621873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.846 [2024-12-14 06:46:29.621920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.846 [2024-12-14 06:46:29.621948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.846 [2024-12-14 06:46:29.626252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.846 [2024-12-14 06:46:29.626291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.846 [2024-12-14 06:46:29.626320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.846 [2024-12-14 06:46:29.630776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.846 [2024-12-14 06:46:29.630814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.630843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.635072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.635110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.635138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.639598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.639651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.639680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.643851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.643929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.643958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.648136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.648173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.648201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.652328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.652380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.652409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.656539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.656590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.656618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.660686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.660738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.660767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.664941] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.665010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.665038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.669144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.669199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.669227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.673351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.673404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.673432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.677455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.677507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.677536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.681660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.681712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.681740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.685868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.685962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:15.847 [2024-12-14 06:46:29.686003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.690111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.690163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.690191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.694256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.694308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.694336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.698381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.698448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.698476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.702653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.702706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.702734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.706674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.706725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.706753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.710951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.710989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.711002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.715209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.715279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.715308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.719545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.719596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.719624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.723711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.723763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.723790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.727938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.728000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.728028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.731956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.732006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.732033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.736009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.736060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.736088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.740127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.740179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.740208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.847 [2024-12-14 06:46:29.744673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.847 [2024-12-14 06:46:29.744747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.847 [2024-12-14 06:46:29.744777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.749152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.749219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.749249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.753446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.753510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.753539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.757588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.757641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.757670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.761787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.761839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.761866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.765999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.766049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.766077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.770082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.770134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.770161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.774159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 
00:16:15.848 [2024-12-14 06:46:29.774211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.774238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.778277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.778328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.778355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.782392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.782445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.782472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.786418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.786469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.786498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.790590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.790657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.790686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.795069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.795110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.795124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.799511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.799562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.799590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.804053] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.804108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.804138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.808554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.808606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.808634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.813085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.813141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.813170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.817432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.817483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.817510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.821855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.821919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.821948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.826221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.826275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.826304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.830626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.830678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.830706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:16:15.848 [2024-12-14 06:46:29.835293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:15.848 [2024-12-14 06:46:29.835359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:15.848 [2024-12-14 06:46:29.835388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.839603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.839654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.839682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.844098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.844158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.844186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.848308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.848380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.848408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.852787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.852844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.852872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.856970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.857025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.857053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.861120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.861171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.861199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.865198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.865266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.865294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.869403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.869454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.869481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.873570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.873620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.873648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.877751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.877802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.877831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.881929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.881979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.882006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.885947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.885997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.886025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.889960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.890010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.890037] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.894055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.894108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.894137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.898119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.898171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.898198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.902138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.902189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.902217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.906195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.906245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.906274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.910278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.910329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.910356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.914287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.914339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.109 [2024-12-14 06:46:29.914366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.109 [2024-12-14 06:46:29.918440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.109 [2024-12-14 06:46:29.918491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.918519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.922598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.922650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.922677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.926652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.926704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.926732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.930679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.930730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.930757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.934611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.934662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.934690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.938662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.938714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.938742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.942642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.942694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.942722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.946775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.946827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.946855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.950810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.950862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.950890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.954825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.954877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.954941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.958765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.958816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.958843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.963043] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.963081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.963093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.967107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.967144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.967174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.971287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.971354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.971382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.975412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.975481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.975509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.979527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.979578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.979606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.983641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.983694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.983721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.987684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.987736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.987764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.991694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.991745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.991773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.995639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.995689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.995716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:29.999786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:29.999838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:29.999865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:30.004412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 
00:16:16.110 [2024-12-14 06:46:30.004468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:30.004497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:30.008769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:30.008825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:30.008853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:30.013124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:30.013178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:30.013207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:30.017431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:30.017484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:30.017512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:30.022785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:30.022839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.110 [2024-12-14 06:46:30.022864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.110 [2024-12-14 06:46:30.027127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.110 [2024-12-14 06:46:30.027168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.027181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.031460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.031523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.031551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.035658] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.035710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.035738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.039814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.039866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.039906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.044005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.044056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.044084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.048029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.048080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.048107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.052435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.052486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.052514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.056878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.056940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.056968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.061001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.061052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.061079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.065014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.065064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.065092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.069183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.069234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.069262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.073348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.073406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.073434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.077467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.077522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.077550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.081685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.081737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.081765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.085900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.085950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.085977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.090067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.090118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.090145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.111 [2024-12-14 06:46:30.094171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.111 [2024-12-14 06:46:30.094223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.111 [2024-12-14 06:46:30.094252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.098594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.098646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.098675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.102798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.102851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.102878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.107368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.107419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.107447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.111435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.111485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.111513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.115460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.115512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.115539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.119634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.119685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.119713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.123844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.123922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.123936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.127937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.128017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.128045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.131975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.132025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.132053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.136031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.136082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.136109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.140072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.140123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.140151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.144099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.144150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.144177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.148128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.148179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:16.372 [2024-12-14 06:46:30.148207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.152177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.152228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.152256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.156174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.156225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.156252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.160103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.160154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.160182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.164145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.164196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.164224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.168222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.168274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.168302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.172344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.172396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.172423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.176394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.176445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.176472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.180504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.180556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.180584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.184630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.184684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.184711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.188673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.188725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.188753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.372 [2024-12-14 06:46:30.192862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.372 [2024-12-14 06:46:30.192923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.372 [2024-12-14 06:46:30.192951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.196967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.197018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.197046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.201176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.201228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.201257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.205438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.205490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.205518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.209647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.209701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.209730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.213692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.213744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.213771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.217942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.217993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.218021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.222064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.222132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.222160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.226186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.226238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.226265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.230188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.230240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.230267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.234219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 
00:16:16.373 [2024-12-14 06:46:30.234270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.234297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.238271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.238339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.238366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.242284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.242335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.242362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.246325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.246377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.246405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.250324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.250392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.250419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.254380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.254437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.254465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.258432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.258483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.258510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.262542] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.262594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.262621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.266613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.266665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.266693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.270716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.270768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.270795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.274808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.274860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.274888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.278845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.278942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.278972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.282776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.282828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.282855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.286872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.286957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.286969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.290977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.291012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.291025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.294883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.294999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.295028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.298821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.298872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.298955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.302859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.302959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.373 [2024-12-14 06:46:30.302988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.373 [2024-12-14 06:46:30.307022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.373 [2024-12-14 06:46:30.307059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.374 [2024-12-14 06:46:30.307087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.374 [2024-12-14 06:46:30.311144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.374 [2024-12-14 06:46:30.311181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.374 [2024-12-14 06:46:30.311209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.374 [2024-12-14 06:46:30.315472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.374 [2024-12-14 06:46:30.315539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.374 [2024-12-14 06:46:30.315567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.374 [2024-12-14 06:46:30.319910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.374 [2024-12-14 06:46:30.319959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.374 [2024-12-14 06:46:30.319987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.374 [2024-12-14 06:46:30.324523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.374 [2024-12-14 06:46:30.324592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.374 [2024-12-14 06:46:30.324621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.374 [2024-12-14 06:46:30.329547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.374 [2024-12-14 06:46:30.329599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.374 [2024-12-14 06:46:30.329627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.374 [2024-12-14 06:46:30.334050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.374 [2024-12-14 06:46:30.334088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.374 [2024-12-14 06:46:30.334117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.374 [2024-12-14 06:46:30.338574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.374 [2024-12-14 06:46:30.338625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.374 [2024-12-14 06:46:30.338652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.374 [2024-12-14 06:46:30.343147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.374 [2024-12-14 06:46:30.343188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.374 [2024-12-14 06:46:30.343217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.374 [2024-12-14 06:46:30.347723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.374 [2024-12-14 06:46:30.347775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.374 [2024-12-14 06:46:30.347803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.374 [2024-12-14 06:46:30.352140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.374 [2024-12-14 06:46:30.352182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.374 [2024-12-14 06:46:30.352210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.374 [2024-12-14 06:46:30.356372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.374 [2024-12-14 06:46:30.356422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.374 [2024-12-14 06:46:30.356450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.374 [2024-12-14 06:46:30.360875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.374 [2024-12-14 06:46:30.360951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.374 [2024-12-14 06:46:30.360979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.365216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.365266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.365295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.369531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.369582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.369609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.373755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.373807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.373835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.378113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.378164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:16.635 [2024-12-14 06:46:30.378192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.382243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.382293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.382321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.386283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.386335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.386363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.390430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.390483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.390511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.394559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.394611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.394639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.398707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.398759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.398788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.403025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.403067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.403080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.407061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.407101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.407130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.411342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.411392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.411419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.415470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.415521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.415549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.419918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.419980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.420008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.424427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.424480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.424523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.428612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.428664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.428693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.432877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.432937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.432966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.437033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.437083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.437111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.441141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.441191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.441219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.445225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.445275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.445302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.449373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.449424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.449452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.453405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.453457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.453484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.457459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.457509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.457536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.461651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.635 [2024-12-14 06:46:30.461703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.635 [2024-12-14 06:46:30.461731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.635 [2024-12-14 06:46:30.466322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 
00:16:16.635 [2024-12-14 06:46:30.466392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.466420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.470605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.470656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.470684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.474771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.474821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.474849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.479051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.479088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.479117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.483109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.483147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.483176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.487138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.487175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.487203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.491077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.491115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.491143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.495084] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.495122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.495151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.499150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.499189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.499218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.503458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.503508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.503536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.507663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.507713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.507740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.511787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.511837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.511864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.515836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.515915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.515928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.519991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.520043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.520070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.524192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.524243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.524271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.528412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.528462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.528489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.532702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.532754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.532781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.537238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.537290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.537333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.541714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.541767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.541796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.546152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.546205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.546234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.550803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.550885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.550947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.555673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.555723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.555750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.560435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.560487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.560514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.565090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.565145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.565159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.569870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.569944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.569989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.574451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.574487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.574515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.579240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.579298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.579310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.636 [2024-12-14 06:46:30.583869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.636 [2024-12-14 06:46:30.583933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.636 [2024-12-14 06:46:30.583978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.637 [2024-12-14 06:46:30.588795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.637 [2024-12-14 06:46:30.588846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.637 [2024-12-14 06:46:30.588874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.637 [2024-12-14 06:46:30.593848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.637 [2024-12-14 06:46:30.593910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.637 [2024-12-14 06:46:30.593940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.637 [2024-12-14 06:46:30.599045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.637 [2024-12-14 06:46:30.599085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.637 [2024-12-14 06:46:30.599098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.637 [2024-12-14 06:46:30.603697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.637 [2024-12-14 06:46:30.603750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.637 [2024-12-14 06:46:30.603778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.637 [2024-12-14 06:46:30.608621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.637 [2024-12-14 06:46:30.608676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.637 [2024-12-14 06:46:30.608720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.637 [2024-12-14 06:46:30.613158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.637 [2024-12-14 06:46:30.613212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.637 [2024-12-14 06:46:30.613241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.637 [2024-12-14 06:46:30.617729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.637 [2024-12-14 06:46:30.617780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:16.637 [2024-12-14 06:46:30.617808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.637 [2024-12-14 06:46:30.622807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.637 [2024-12-14 06:46:30.622870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.637 [2024-12-14 06:46:30.622953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.939 [2024-12-14 06:46:30.627589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.939 [2024-12-14 06:46:30.627641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.939 [2024-12-14 06:46:30.627669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.939 [2024-12-14 06:46:30.632260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.939 [2024-12-14 06:46:30.632343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.939 [2024-12-14 06:46:30.632371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.939 [2024-12-14 06:46:30.636767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.939 [2024-12-14 06:46:30.636819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.939 [2024-12-14 06:46:30.636848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.939 [2024-12-14 06:46:30.641452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.939 [2024-12-14 06:46:30.641492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.939 [2024-12-14 06:46:30.641522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.939 [2024-12-14 06:46:30.646354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.646408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.646436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.651084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.651125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.651138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.655720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.655773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.655802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.660413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.660450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.660478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.664882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.664964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.664994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.669622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.669660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.669688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.674221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.674274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.674317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.678783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.678820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.678849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.683524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.683562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.683591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.688136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.688174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.688202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.692668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.692706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.692735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.697230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.697268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.697298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.701761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.701798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.701826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.706109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.706146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.706175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.710576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.710613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.710641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.715170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 
00:16:16.940 [2024-12-14 06:46:30.715211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.715239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.719717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.719754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.719782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.724425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.724462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.724490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.728905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.728966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.728998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.733223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.733274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.733302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.737600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.737637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.737666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.742083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.742119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.742147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.746300] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.746337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.746364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.750700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.750737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.750764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.755258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.755309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.755337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.759554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.759592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.759620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.763909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.763955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.763999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.768305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.768342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.768383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.772560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.772596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.772623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.777006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.777042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.777070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.781423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.781460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.781488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.785703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.785740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.785768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.790093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.790128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.790156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.794635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.794671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.794700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.798980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.799020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.799033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.803406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.803458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.940 [2024-12-14 06:46:30.803485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.940 [2024-12-14 06:46:30.807706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.940 [2024-12-14 06:46:30.807757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.807785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.811959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.812042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.812072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.816530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.816582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.816610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.821203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.821258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.821301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.825790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.825843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.825872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.830593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.830647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.830675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.835212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.835253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.835268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.839816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.839869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.839909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.844523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.844580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.844608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.848996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.849051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.849081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.853475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.853527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.853554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.857841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.857919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.857964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.862671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.862743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.862757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.867433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.867505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:16.941 [2024-12-14 06:46:30.867520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.872228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.872267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.872280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.877086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.877141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.877180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.881424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.881475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.881503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.886281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.886350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.886380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.891065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.891106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.891120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.895826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.895923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.895938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.900110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.900160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.900188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.904349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.904401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.904429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.908601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.908654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.908681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.912872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.912933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.912961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.917030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.917082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.917109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.921151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.921202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.921229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:16.941 [2024-12-14 06:46:30.925340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:16.941 [2024-12-14 06:46:30.925391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:16.941 [2024-12-14 06:46:30.925419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.929640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.929692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.929721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.933825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.933876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.933915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.938265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.938318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.938346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.942276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.942328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.942356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.946480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.946534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.946561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.950610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.950678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.950707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.954790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.954841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.954869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.958954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 
00:16:17.202 [2024-12-14 06:46:30.958992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.959005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.962959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.963007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.963034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.967142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.967181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.967210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.971326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.971376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.971404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.975525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.975577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.975604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.979684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.979735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.979763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.983855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.983919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.983947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.987989] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.988039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.988068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.992206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.992258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.992285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:30.996401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:30.996453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:30.996481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:31.000411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:31.000462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:31.000489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:31.004580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:31.004633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:31.004661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:31.008741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:31.008792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:31.008820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:31.012903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:31.012964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:31.012991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:31.016987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:31.017037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:31.017064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:31.021102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:31.021153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:31.021181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:31.025428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:31.025481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:31.025510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:31.029749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:31.029805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.202 [2024-12-14 06:46:31.029833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.202 [2024-12-14 06:46:31.033989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.202 [2024-12-14 06:46:31.034040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.034068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.038005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.038058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.038086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.042057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.042108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.042136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.046110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.046160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.046189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.050140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.050191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.050219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.054207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.054259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.054287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.058230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.058281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.058308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.062277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.062329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.062356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.066312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.066364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.066391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.070441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.070493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.070521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.074686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.074739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.074751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.078953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.078988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.079017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.083030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.083072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.083085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.087107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.087149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.087178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.091171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.091212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.091241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.095396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.095446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.095473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.099432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.099483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:17.203 [2024-12-14 06:46:31.099510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.103506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.103558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.103595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.108338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.108392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.108421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.113081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.113135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.113164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.117679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.117730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.117758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.121881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.121942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.121970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.125968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.126019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.126047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.130018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.130069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.130096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.134449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.134518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.134547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.139099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.139150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.139179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.143319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.143371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.143399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.147382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.203 [2024-12-14 06:46:31.147433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.203 [2024-12-14 06:46:31.147461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.203 [2024-12-14 06:46:31.151499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.204 [2024-12-14 06:46:31.151550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.204 [2024-12-14 06:46:31.151577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.204 [2024-12-14 06:46:31.155609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.204 [2024-12-14 06:46:31.155659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.204 [2024-12-14 06:46:31.155696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.204 [2024-12-14 06:46:31.159754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.204 [2024-12-14 06:46:31.159805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.204 [2024-12-14 06:46:31.159832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.204 [2024-12-14 06:46:31.163960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.204 [2024-12-14 06:46:31.164023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.204 [2024-12-14 06:46:31.164051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.204 [2024-12-14 06:46:31.167966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.204 [2024-12-14 06:46:31.168017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.204 [2024-12-14 06:46:31.168044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.204 [2024-12-14 06:46:31.172051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.204 [2024-12-14 06:46:31.172102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.204 [2024-12-14 06:46:31.172130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.204 [2024-12-14 06:46:31.176222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.204 [2024-12-14 06:46:31.176274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.204 [2024-12-14 06:46:31.176302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.204 [2024-12-14 06:46:31.180351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.204 [2024-12-14 06:46:31.180404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.204 [2024-12-14 06:46:31.180431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.204 [2024-12-14 06:46:31.184521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.204 [2024-12-14 06:46:31.184572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.204 [2024-12-14 06:46:31.184600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.204 [2024-12-14 06:46:31.189137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 
00:16:17.204 [2024-12-14 06:46:31.189191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.204 [2024-12-14 06:46:31.189220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.193617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.193670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.193698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.197982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.198036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.198065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.202095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.202148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.202176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.206057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.206106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.206133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.210184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.210235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.210263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.214248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.214300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.214328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.218377] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.218429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.218456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.222536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.222588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.222616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.226650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.226702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.226729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.230627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.230664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.230692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.234696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.234731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.234760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.238698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.238732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.238761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.242786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.242821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.242849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.247037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.247076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.247090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.251016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.251053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.251066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.254994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.255030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.255059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.258988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.259024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.259036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.464 [2024-12-14 06:46:31.262979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.464 [2024-12-14 06:46:31.263015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.464 [2024-12-14 06:46:31.263044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.266870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.267119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.267153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.271198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.271408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.271561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.275798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.276008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.276162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.280498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.280687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.280838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.285031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.285237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.285488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.289732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.289950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.290147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.294116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.294321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.294573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.298715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.298958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.299177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.303403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.303584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.303731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.307956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.308120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.308152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.312209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.312245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.312275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.316437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.316473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.316501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.320585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.320622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.320650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.324822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.324858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.324886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.328962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.328997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.329025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.333030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.333066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:17.465 [2024-12-14 06:46:31.333094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.337381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.337418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.337446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.341731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.341767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.341796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.346052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.346088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.346116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.350502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.350541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.350570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.355130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.355171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.355185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.359495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.359530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.359559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.363806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.363842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.363871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.368408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.368444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.368473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.372852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.372914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.372959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.377184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.377221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.377250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.381417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.465 [2024-12-14 06:46:31.381610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.465 [2024-12-14 06:46:31.381643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.465 [2024-12-14 06:46:31.385789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.385826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.385855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.390027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.390063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.390093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.394490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.394527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.394557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.398664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.398699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.398728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.402798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.402835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.402863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.406840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.406876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.406957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.411126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.411166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.411181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.415383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.415418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.415447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.419686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.419722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.419751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.424004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 
00:16:17.466 [2024-12-14 06:46:31.424039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.424068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.428090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.428125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.428153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.432198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.432233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.432261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.436317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.436353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.436381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.440431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.440467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.440495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.444614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.444651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.444681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.466 [2024-12-14 06:46:31.448817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.466 [2024-12-14 06:46:31.448853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.466 [2024-12-14 06:46:31.448882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.725 [2024-12-14 06:46:31.453374] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.725 [2024-12-14 06:46:31.453411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.725 [2024-12-14 06:46:31.453440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.725 [2024-12-14 06:46:31.457694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.725 [2024-12-14 06:46:31.457730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.725 [2024-12-14 06:46:31.457759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.725 [2024-12-14 06:46:31.462002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.725 [2024-12-14 06:46:31.462037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.725 [2024-12-14 06:46:31.462066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.725 [2024-12-14 06:46:31.466116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.725 [2024-12-14 06:46:31.466152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.725 [2024-12-14 06:46:31.466180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.725 [2024-12-14 06:46:31.470234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.725 [2024-12-14 06:46:31.470285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.725 [2024-12-14 06:46:31.470314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.725 [2024-12-14 06:46:31.474334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.725 [2024-12-14 06:46:31.474371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.725 [2024-12-14 06:46:31.474399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.725 [2024-12-14 06:46:31.478438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.725 [2024-12-14 06:46:31.478473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.725 [2024-12-14 06:46:31.478503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:16:17.725 [2024-12-14 06:46:31.482522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.725 [2024-12-14 06:46:31.482559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.725 [2024-12-14 06:46:31.482588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.725 [2024-12-14 06:46:31.486714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.725 [2024-12-14 06:46:31.486750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.725 [2024-12-14 06:46:31.486778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.725 [2024-12-14 06:46:31.490807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.725 [2024-12-14 06:46:31.490843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.725 [2024-12-14 06:46:31.490872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.725 [2024-12-14 06:46:31.494993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.725 [2024-12-14 06:46:31.495030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.725 [2024-12-14 06:46:31.495060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.499081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.499119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.499133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.503192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.503232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.503260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.507412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.507448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.507476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.511606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.511642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.511671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.515896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.515962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.515992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.520152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.520188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.520216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.524289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.524324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.524353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.528402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.528439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.528467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.532502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.532537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.532565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.536594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.536629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.536658] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.540822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.540858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.540888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.545115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.545151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.545180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.549266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.549302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.549331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.553335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.553386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.553415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.557450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.557486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.557515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.561761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.561797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:17.726 [2024-12-14 06:46:31.561826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:17.726 [2024-12-14 06:46:31.565956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940) 00:16:17.726 [2024-12-14 06:46:31.565992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:16:17.726 [2024-12-14 06:46:31.566020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:16:17.726 [2024-12-14 06:46:31.570104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940)
00:16:17.726 [2024-12-14 06:46:31.570140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:17.726 [2024-12-14 06:46:31.570169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:16:17.726 [2024-12-14 06:46:31.574561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940)
00:16:17.726 [2024-12-14 06:46:31.574597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:17.726 [2024-12-14 06:46:31.574625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:16:17.726 [2024-12-14 06:46:31.579019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940)
00:16:17.726 [2024-12-14 06:46:31.579059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:17.726 [2024-12-14 06:46:31.579074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:16:17.726 [2024-12-14 06:46:31.583418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940)
00:16:17.726 [2024-12-14 06:46:31.583453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:17.726 [2024-12-14 06:46:31.583481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:16:17.726 [2024-12-14 06:46:31.587582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16fe940)
00:16:17.726 [2024-12-14 06:46:31.587618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:17.726 [2024-12-14 06:46:31.587647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:16:17.726
00:16:17.726 Latency(us)
00:16:17.726 [2024-12-14T06:46:31.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:17.726 [2024-12-14T06:46:31.718Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:16:17.726 nvme0n1 : 2.00 7223.33 902.92 0.00 0.00 2212.11 1742.66 10783.65
00:16:17.726 [2024-12-14T06:46:31.718Z] ===================================================================================================================
00:16:17.726 [2024-12-14T06:46:31.718Z] Total : 7223.33 902.92 0.00 0.00 2212.11 1742.66 10783.65
00:16:17.726 0
00:16:17.726 06:46:31 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:16:17.726 06:46:31 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:16:17.726 06:46:31 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:16:17.726 | .driver_specific
00:16:17.726 | .nvme_error
00:16:17.726 | .status_code
00:16:17.726 | .command_transient_transport_error'
00:16:17.726 06:46:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:16:17.986 06:46:31 -- host/digest.sh@71 -- # (( 466 > 0 ))
00:16:17.986 06:46:31 -- host/digest.sh@73 -- # killprocess 71996
00:16:17.986 06:46:31 -- common/autotest_common.sh@936 -- # '[' -z 71996 ']'
00:16:17.986 06:46:31 -- common/autotest_common.sh@940 -- # kill -0 71996
00:16:17.986 06:46:31 -- common/autotest_common.sh@941 -- # uname
00:16:17.986 06:46:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:17.986 06:46:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71996
00:16:17.986 killing process with pid 71996
Received shutdown signal, test time was about 2.000000 seconds
00:16:17.986
00:16:17.986 Latency(us)
00:16:17.986 [2024-12-14T06:46:31.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:17.986 [2024-12-14T06:46:31.978Z] ===================================================================================================================
00:16:17.986 [2024-12-14T06:46:31.978Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:17.986 06:46:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:16:17.986 06:46:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:16:17.986 06:46:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71996'
00:16:17.986 06:46:31 -- common/autotest_common.sh@955 -- # kill 71996
00:16:17.986 06:46:31 -- common/autotest_common.sh@960 -- # wait 71996
00:16:18.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:16:18.244 06:46:32 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:16:18.244 06:46:32 -- host/digest.sh@54 -- # local rw bs qd
00:16:18.244 06:46:32 -- host/digest.sh@56 -- # rw=randwrite
00:16:18.244 06:46:32 -- host/digest.sh@56 -- # bs=4096
00:16:18.244 06:46:32 -- host/digest.sh@56 -- # qd=128
00:16:18.244 06:46:32 -- host/digest.sh@58 -- # bperfpid=72057
00:16:18.244 06:46:32 -- host/digest.sh@60 -- # waitforlisten 72057 /var/tmp/bperf.sock
00:16:18.245 06:46:32 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:16:18.245 06:46:32 -- common/autotest_common.sh@829 -- # '[' -z 72057 ']'
00:16:18.245 06:46:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:16:18.245 06:46:32 -- common/autotest_common.sh@834 -- # local max_retries=100
00:16:18.245 06:46:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:16:18.245 06:46:32 -- common/autotest_common.sh@838 -- # xtrace_disable
00:16:18.245 06:46:32 -- common/autotest_common.sh@10 -- # set +x
00:16:18.245 [2024-12-14 06:46:32.154817] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
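Taken together, the get_transient_errcount trace above reduces to one RPC call plus a jq filter; a minimal re-expression of that check follows (the errcount variable name is illustrative, the rpc.py invocation and jq filter are copied from the trace, and 466 is the count this run happened to report):

    # Query bdevperf's per-bdev I/O statistics over the bperf RPC socket and pull
    # out how many completions were counted as TRANSIENT TRANSPORT ERROR for nvme0n1.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The randread digest run only passes if digest errors were actually observed.
    (( errcount > 0 ))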
00:16:18.245 [2024-12-14 06:46:32.155216] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72057 ]
00:16:18.503 [2024-12-14 06:46:32.286886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:18.503 [2024-12-14 06:46:32.341004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:19.437 06:46:33 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:19.437 06:46:33 -- common/autotest_common.sh@862 -- # return 0
00:16:19.437 06:46:33 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:16:19.437 06:46:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:16:19.437 06:46:33 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:16:19.437 06:46:33 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.437 06:46:33 -- common/autotest_common.sh@10 -- # set +x
00:16:19.437 06:46:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.437 06:46:33 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:16:19.437 06:46:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:16:19.696 nvme0n1
00:16:19.696 06:46:33 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:16:19.696 06:46:33 -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:19.696 06:46:33 -- common/autotest_common.sh@10 -- # set +x
00:16:19.696 06:46:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:19.696 06:46:33 -- host/digest.sh@69 -- # bperf_py perform_tests
00:16:19.696 06:46:33 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:16:19.955 Running I/O for 2 seconds...
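The xtrace lines above configure the fresh bdevperf instance for the randwrite digest-error case; condensed into plain commands, the sequence is roughly the following sketch. BPERF_RPC and TARGET_RPC are illustrative shorthands: the trace routes the two accel_error_inject_error calls through rpc_cmd rather than the bperf socket, so sending them to rpc.py's default socket here is an assumption.

    BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    TARGET_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # rpc_cmd in the trace; default socket assumed

    # Count NVMe error completions per status code and retry failed I/O indefinitely,
    # so injected digest errors show up as statistics rather than hard failures.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Start with crc32c error injection disabled in the accel layer.
    $TARGET_RPC accel_error_inject_error -o crc32c -t disable

    # Attach the TCP target with data digest (--ddgst) enabled; the controller
    # shows up as bdev nvme0n1, the device queried by get_transient_errcount.
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Now corrupt every 256th crc32c operation so data digest errors are provoked
    # once I/O starts flowing.
    $TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 256

    # Drive the timed randwrite workload through bdevperf's RPC helper.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests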
00:16:19.955 [2024-12-14 06:46:33.778101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ddc00 00:16:19.955 [2024-12-14 06:46:33.779504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:19.955 [2024-12-14 06:46:33.779697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:19.955 [2024-12-14 06:46:33.793072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fef90 00:16:19.955 [2024-12-14 06:46:33.794393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:19.955 [2024-12-14 06:46:33.794428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.955 [2024-12-14 06:46:33.807601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ff3c8 00:16:19.955 [2024-12-14 06:46:33.808886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:19.955 [2024-12-14 06:46:33.809083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:19.955 [2024-12-14 06:46:33.822054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190feb58 00:16:19.955 [2024-12-14 06:46:33.823428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:19.955 [2024-12-14 06:46:33.823622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:19.955 [2024-12-14 06:46:33.836852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fe720 00:16:19.955 [2024-12-14 06:46:33.838326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:19.955 [2024-12-14 06:46:33.838523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:19.955 [2024-12-14 06:46:33.851889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fe2e8 00:16:19.955 [2024-12-14 06:46:33.853329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:19.955 [2024-12-14 06:46:33.853537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:19.955 [2024-12-14 06:46:33.866759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fdeb0 00:16:19.955 [2024-12-14 06:46:33.868323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:19.955 [2024-12-14 06:46:33.868549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:16:19.955 [2024-12-14 06:46:33.884120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fda78 00:16:19.955 [2024-12-14 06:46:33.885635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:19.955 [2024-12-14 06:46:33.885846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:19.955 [2024-12-14 06:46:33.900538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fd640 00:16:19.955 [2024-12-14 06:46:33.902024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:19.955 [2024-12-14 06:46:33.902237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:19.955 [2024-12-14 06:46:33.915543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fd208 00:16:19.955 [2024-12-14 06:46:33.917004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:19.955 [2024-12-14 06:46:33.917201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:19.955 [2024-12-14 06:46:33.930840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fcdd0 00:16:19.955 [2024-12-14 06:46:33.932295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:19.955 [2024-12-14 06:46:33.932488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:20.214 [2024-12-14 06:46:33.946260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fc998 00:16:20.215 [2024-12-14 06:46:33.947746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:33.947969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:33.961572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fc560 00:16:20.215 [2024-12-14 06:46:33.962996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:33.963029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:33.976963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fc128 00:16:20.215 [2024-12-14 06:46:33.978328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:33.978392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:33.992224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fbcf0 00:16:20.215 [2024-12-14 06:46:33.993420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:33.993454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:34.007910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fb8b8 00:16:20.215 [2024-12-14 06:46:34.009400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:34.009428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:34.023626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fb480 00:16:20.215 [2024-12-14 06:46:34.024951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:34.025007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:34.038550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fb048 00:16:20.215 [2024-12-14 06:46:34.039981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:34.040041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:34.053252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fac10 00:16:20.215 [2024-12-14 06:46:34.054554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:34.054604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:34.067944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fa7d8 00:16:20.215 [2024-12-14 06:46:34.069227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:34.069255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:34.082492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190fa3a0 00:16:20.215 [2024-12-14 06:46:34.083706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:34.083922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:34.097105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f9f68 00:16:20.215 [2024-12-14 06:46:34.098236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:34.098270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:34.111507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f9b30 00:16:20.215 [2024-12-14 06:46:34.112746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:34.112773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:34.128145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f96f8 00:16:20.215 [2024-12-14 06:46:34.129374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:34.129409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:34.145402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f92c0 00:16:20.215 [2024-12-14 06:46:34.146700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:34.146730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:34.162208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f8e88 00:16:20.215 [2024-12-14 06:46:34.163376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:34.163548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:34.177987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f8a50 00:16:20.215 [2024-12-14 06:46:34.179152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:34.179193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:20.215 [2024-12-14 06:46:34.193790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f8618 00:16:20.215 [2024-12-14 06:46:34.194951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.215 [2024-12-14 06:46:34.194987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:20.474 [2024-12-14 06:46:34.209708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f81e0 00:16:20.474 [2024-12-14 06:46:34.210829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.474 [2024-12-14 06:46:34.210930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:20.474 [2024-12-14 06:46:34.225544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f7da8 00:16:20.474 [2024-12-14 06:46:34.226691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.474 [2024-12-14 06:46:34.226725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:20.474 [2024-12-14 06:46:34.241846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f7970 00:16:20.474 [2024-12-14 06:46:34.242982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.474 [2024-12-14 06:46:34.243018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:20.474 [2024-12-14 06:46:34.257707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f7538 00:16:20.474 [2024-12-14 06:46:34.258887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.474 [2024-12-14 06:46:34.258955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:20.474 [2024-12-14 06:46:34.273915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f7100 00:16:20.474 [2024-12-14 06:46:34.275055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.474 [2024-12-14 06:46:34.275092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:20.474 [2024-12-14 06:46:34.289455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f6cc8 00:16:20.474 [2024-12-14 06:46:34.290546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.474 [2024-12-14 06:46:34.290580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:20.474 [2024-12-14 06:46:34.304537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f6890 00:16:20.474 [2024-12-14 06:46:34.305556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.474 [2024-12-14 06:46:34.305604] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:20.474 [2024-12-14 06:46:34.319519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f6458 00:16:20.474 [2024-12-14 06:46:34.320695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.474 [2024-12-14 06:46:34.320723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:20.474 [2024-12-14 06:46:34.334559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f6020 00:16:20.474 [2024-12-14 06:46:34.335765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.475 [2024-12-14 06:46:34.335793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:20.475 [2024-12-14 06:46:34.349380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f5be8 00:16:20.475 [2024-12-14 06:46:34.350580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.475 [2024-12-14 06:46:34.350613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:20.475 [2024-12-14 06:46:34.364227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f57b0 00:16:20.475 [2024-12-14 06:46:34.365218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.475 [2024-12-14 06:46:34.365264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:20.475 [2024-12-14 06:46:34.378810] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f5378 00:16:20.475 [2024-12-14 06:46:34.379830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.475 [2024-12-14 06:46:34.380021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:20.475 [2024-12-14 06:46:34.393883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f4f40 00:16:20.475 [2024-12-14 06:46:34.395187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.475 [2024-12-14 06:46:34.395458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:20.475 [2024-12-14 06:46:34.410850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f4b08 00:16:20.475 [2024-12-14 06:46:34.412038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.475 [2024-12-14 06:46:34.412224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:20.475 [2024-12-14 06:46:34.427102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f46d0 00:16:20.475 [2024-12-14 06:46:34.428244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.475 [2024-12-14 06:46:34.428455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:20.475 [2024-12-14 06:46:34.442183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f4298 00:16:20.475 [2024-12-14 06:46:34.443392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.475 [2024-12-14 06:46:34.443597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:20.475 [2024-12-14 06:46:34.457354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f3e60 00:16:20.475 [2024-12-14 06:46:34.458519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.475 [2024-12-14 06:46:34.458740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.474160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f3a28 00:16:20.733 [2024-12-14 06:46:34.475344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.475552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.489856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f35f0 00:16:20.733 [2024-12-14 06:46:34.491076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.491269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.505426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f31b8 00:16:20.733 [2024-12-14 06:46:34.506489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.506695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.520489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f2d80 00:16:20.733 [2024-12-14 06:46:34.521551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 
06:46:34.521720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.535359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f2948 00:16:20.733 [2024-12-14 06:46:34.536381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.536411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.549854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f2510 00:16:20.733 [2024-12-14 06:46:34.550728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.550763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.564379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f20d8 00:16:20.733 [2024-12-14 06:46:34.565223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.565256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.578852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f1ca0 00:16:20.733 [2024-12-14 06:46:34.579767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.579808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.593481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f1868 00:16:20.733 [2024-12-14 06:46:34.594279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.594329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.607972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f1430 00:16:20.733 [2024-12-14 06:46:34.608805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.608837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.623004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f0ff8 00:16:20.733 [2024-12-14 06:46:34.623883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23887 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:16:20.733 [2024-12-14 06:46:34.624076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.637819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f0bc0 00:16:20.733 [2024-12-14 06:46:34.638813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.639041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.652660] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f0788 00:16:20.733 [2024-12-14 06:46:34.653642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.653849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.667415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190f0350 00:16:20.733 [2024-12-14 06:46:34.668344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.668580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.682091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190eff18 00:16:20.733 [2024-12-14 06:46:34.683093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.683378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.696925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190efae0 00:16:20.733 [2024-12-14 06:46:34.697886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.698153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:20.733 [2024-12-14 06:46:34.712870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ef6a8 00:16:20.733 [2024-12-14 06:46:34.713881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.733 [2024-12-14 06:46:34.714130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:20.992 [2024-12-14 06:46:34.729151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ef270 00:16:20.992 [2024-12-14 06:46:34.730162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:3640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.992 [2024-12-14 06:46:34.730383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:20.992 [2024-12-14 06:46:34.744136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190eee38 00:16:20.992 [2024-12-14 06:46:34.745057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.992 [2024-12-14 06:46:34.745294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:20.992 [2024-12-14 06:46:34.759821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190eea00 00:16:20.992 [2024-12-14 06:46:34.760767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.992 [2024-12-14 06:46:34.760993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:20.992 [2024-12-14 06:46:34.774715] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ee5c8 00:16:20.992 [2024-12-14 06:46:34.775486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.992 [2024-12-14 06:46:34.775625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:20.992 [2024-12-14 06:46:34.789531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ee190 00:16:20.992 [2024-12-14 06:46:34.790167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.992 [2024-12-14 06:46:34.790189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:20.992 [2024-12-14 06:46:34.804266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190edd58 00:16:20.992 [2024-12-14 06:46:34.804925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.992 [2024-12-14 06:46:34.804975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:20.992 [2024-12-14 06:46:34.818594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ed920 00:16:20.992 [2024-12-14 06:46:34.819316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.992 [2024-12-14 06:46:34.819353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:20.992 [2024-12-14 06:46:34.833259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ed4e8 00:16:20.992 [2024-12-14 06:46:34.833886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:23684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.992 [2024-12-14 06:46:34.833930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:20.992 [2024-12-14 06:46:34.847832] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ed0b0 00:16:20.992 [2024-12-14 06:46:34.848761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.992 [2024-12-14 06:46:34.848790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:20.992 [2024-12-14 06:46:34.862548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ecc78 00:16:20.992 [2024-12-14 06:46:34.863245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.992 [2024-12-14 06:46:34.863479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:20.992 [2024-12-14 06:46:34.877468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ec840 00:16:20.992 [2024-12-14 06:46:34.878122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.992 [2024-12-14 06:46:34.878331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:20.992 [2024-12-14 06:46:34.893565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ec408 00:16:20.992 [2024-12-14 06:46:34.894219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.992 [2024-12-14 06:46:34.894261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:20.992 [2024-12-14 06:46:34.910128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ebfd0 00:16:20.993 [2024-12-14 06:46:34.910795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.993 [2024-12-14 06:46:34.910834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:20.993 [2024-12-14 06:46:34.925684] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ebb98 00:16:20.993 [2024-12-14 06:46:34.926367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.993 [2024-12-14 06:46:34.926404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:20.993 [2024-12-14 06:46:34.940461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190eb760 00:16:20.993 [2024-12-14 06:46:34.941040] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.993 [2024-12-14 06:46:34.941128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:20.993 [2024-12-14 06:46:34.955076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190eb328 00:16:20.993 [2024-12-14 06:46:34.955795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.993 [2024-12-14 06:46:34.955840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:20.993 [2024-12-14 06:46:34.969911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190eaef0 00:16:20.993 [2024-12-14 06:46:34.970502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:20.993 [2024-12-14 06:46:34.970540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:34.985179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190eaab8 00:16:21.250 [2024-12-14 06:46:34.985784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:34.985836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.000100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ea680 00:16:21.250 [2024-12-14 06:46:35.000644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.000682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.015745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190ea248 00:16:21.250 [2024-12-14 06:46:35.016445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.016504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.030662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e9e10 00:16:21.250 [2024-12-14 06:46:35.031207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.031256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.045204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e99d8 00:16:21.250 [2024-12-14 06:46:35.045687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.045745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.059768] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e95a0 00:16:21.250 [2024-12-14 06:46:35.060437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.060467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.074475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e9168 00:16:21.250 [2024-12-14 06:46:35.074960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.075003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.089013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e8d30 00:16:21.250 [2024-12-14 06:46:35.089516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.089555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.104759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e88f8 00:16:21.250 [2024-12-14 06:46:35.105270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.105309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.120781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e84c0 00:16:21.250 [2024-12-14 06:46:35.121350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.121385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.135634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e8088 00:16:21.250 [2024-12-14 06:46:35.136265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.136296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.150335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e7c50 00:16:21.250 [2024-12-14 
06:46:35.150985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.151016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.165064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e7818 00:16:21.250 [2024-12-14 06:46:35.165676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.165706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.179828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e73e0 00:16:21.250 [2024-12-14 06:46:35.180425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.180455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.194631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e6fa8 00:16:21.250 [2024-12-14 06:46:35.195094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.195122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.209207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e6b70 00:16:21.250 [2024-12-14 06:46:35.209583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.250 [2024-12-14 06:46:35.209610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:21.250 [2024-12-14 06:46:35.223752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e6738 00:16:21.250 [2024-12-14 06:46:35.224322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.251 [2024-12-14 06:46:35.224352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:21.251 [2024-12-14 06:46:35.238538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e6300 00:16:21.251 [2024-12-14 06:46:35.238958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.251 [2024-12-14 06:46:35.239000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:21.508 [2024-12-14 06:46:35.253785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with 
pdu=0x2000190e5ec8 00:16:21.508 [2024-12-14 06:46:35.254202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.508 [2024-12-14 06:46:35.254235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:21.508 [2024-12-14 06:46:35.270699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e5a90 00:16:21.508 [2024-12-14 06:46:35.271094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.508 [2024-12-14 06:46:35.271123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:21.508 [2024-12-14 06:46:35.286396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e5658 00:16:21.508 [2024-12-14 06:46:35.286719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.508 [2024-12-14 06:46:35.286745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:21.508 [2024-12-14 06:46:35.302964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e5220 00:16:21.508 [2024-12-14 06:46:35.303327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.508 [2024-12-14 06:46:35.303484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:21.508 [2024-12-14 06:46:35.319804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e4de8 00:16:21.508 [2024-12-14 06:46:35.320390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.508 [2024-12-14 06:46:35.320562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:21.508 [2024-12-14 06:46:35.336130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e49b0 00:16:21.508 [2024-12-14 06:46:35.336631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.508 [2024-12-14 06:46:35.336812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:21.508 [2024-12-14 06:46:35.353142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e4578 00:16:21.508 [2024-12-14 06:46:35.353640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.508 [2024-12-14 06:46:35.353816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:21.508 [2024-12-14 06:46:35.369787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x192ddc0) with pdu=0x2000190e4140 00:16:21.508 [2024-12-14 06:46:35.370290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.508 [2024-12-14 06:46:35.370467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:21.508 [2024-12-14 06:46:35.385754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e3d08 00:16:21.508 [2024-12-14 06:46:35.386232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.508 [2024-12-14 06:46:35.386409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:21.508 [2024-12-14 06:46:35.401452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e38d0 00:16:21.508 [2024-12-14 06:46:35.401921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.508 [2024-12-14 06:46:35.402108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:21.508 [2024-12-14 06:46:35.417467] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e3498 00:16:21.508 [2024-12-14 06:46:35.417945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.508 [2024-12-14 06:46:35.418153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:21.509 [2024-12-14 06:46:35.434612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e3060 00:16:21.509 [2024-12-14 06:46:35.435085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.509 [2024-12-14 06:46:35.435116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:21.509 [2024-12-14 06:46:35.451293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e2c28 00:16:21.509 [2024-12-14 06:46:35.451583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.509 [2024-12-14 06:46:35.451729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:21.509 [2024-12-14 06:46:35.466346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e27f0 00:16:21.509 [2024-12-14 06:46:35.466569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.509 [2024-12-14 06:46:35.466595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:21.509 [2024-12-14 06:46:35.480822] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e23b8 00:16:21.509 [2024-12-14 06:46:35.481070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.509 [2024-12-14 06:46:35.481116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:21.509 [2024-12-14 06:46:35.495485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e1f80 00:16:21.509 [2024-12-14 06:46:35.495901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.509 [2024-12-14 06:46:35.495926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:21.767 [2024-12-14 06:46:35.511052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e1b48 00:16:21.768 [2024-12-14 06:46:35.511489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.511705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.526874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e1710 00:16:21.768 [2024-12-14 06:46:35.527281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.527483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.541982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e12d8 00:16:21.768 [2024-12-14 06:46:35.542346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.542508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.556808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e0ea0 00:16:21.768 [2024-12-14 06:46:35.557189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.557378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.571629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e0a68 00:16:21.768 [2024-12-14 06:46:35.572035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.572238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.586834] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e0630 00:16:21.768 [2024-12-14 06:46:35.587221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.587503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.603267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190e01f8 00:16:21.768 [2024-12-14 06:46:35.603666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.603953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.618937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190dfdc0 00:16:21.768 [2024-12-14 06:46:35.619236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.619262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.633942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190df988 00:16:21.768 [2024-12-14 06:46:35.634251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.634273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.648575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190df550 00:16:21.768 [2024-12-14 06:46:35.648701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.648722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.662975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190df118 00:16:21.768 [2024-12-14 06:46:35.663091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.663114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.677279] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190dece0 00:16:21.768 [2024-12-14 06:46:35.677380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.677401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 
06:46:35.691856] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190de8a8 00:16:21.768 [2024-12-14 06:46:35.692130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.692152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.706446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190de038 00:16:21.768 [2024-12-14 06:46:35.706529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.706550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.728487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190de038 00:16:21.768 [2024-12-14 06:46:35.729839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.729874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:21.768 [2024-12-14 06:46:35.743087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190de470 00:16:21.768 [2024-12-14 06:46:35.744523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:21.768 [2024-12-14 06:46:35.744552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.027 [2024-12-14 06:46:35.758208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192ddc0) with pdu=0x2000190de8a8 00:16:22.027 [2024-12-14 06:46:35.759876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:22.027 [2024-12-14 06:46:35.760070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:22.027 00:16:22.027 Latency(us) 00:16:22.027 [2024-12-14T06:46:36.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.027 [2024-12-14T06:46:36.019Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:22.027 nvme0n1 : 2.01 16563.18 64.70 0.00 0.00 7722.51 6851.49 21924.77 00:16:22.027 [2024-12-14T06:46:36.019Z] =================================================================================================================== 00:16:22.027 [2024-12-14T06:46:36.019Z] Total : 16563.18 64.70 0.00 0.00 7722.51 6851.49 21924.77 00:16:22.027 0 00:16:22.027 06:46:35 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:22.027 06:46:35 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:22.027 06:46:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:22.027 06:46:35 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:22.027 | .driver_specific 00:16:22.027 | 
.nvme_error 00:16:22.027 | .status_code 00:16:22.027 | .command_transient_transport_error' 00:16:22.286 06:46:36 -- host/digest.sh@71 -- # (( 130 > 0 )) 00:16:22.286 06:46:36 -- host/digest.sh@73 -- # killprocess 72057 00:16:22.286 06:46:36 -- common/autotest_common.sh@936 -- # '[' -z 72057 ']' 00:16:22.286 06:46:36 -- common/autotest_common.sh@940 -- # kill -0 72057 00:16:22.286 06:46:36 -- common/autotest_common.sh@941 -- # uname 00:16:22.286 06:46:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.286 06:46:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72057 00:16:22.286 killing process with pid 72057 00:16:22.286 Received shutdown signal, test time was about 2.000000 seconds 00:16:22.286 00:16:22.286 Latency(us) 00:16:22.286 [2024-12-14T06:46:36.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.286 [2024-12-14T06:46:36.278Z] =================================================================================================================== 00:16:22.286 [2024-12-14T06:46:36.278Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:22.286 06:46:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:22.286 06:46:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:22.286 06:46:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72057' 00:16:22.286 06:46:36 -- common/autotest_common.sh@955 -- # kill 72057 00:16:22.286 06:46:36 -- common/autotest_common.sh@960 -- # wait 72057 00:16:22.545 06:46:36 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:16:22.545 06:46:36 -- host/digest.sh@54 -- # local rw bs qd 00:16:22.545 06:46:36 -- host/digest.sh@56 -- # rw=randwrite 00:16:22.545 06:46:36 -- host/digest.sh@56 -- # bs=131072 00:16:22.545 06:46:36 -- host/digest.sh@56 -- # qd=16 00:16:22.545 06:46:36 -- host/digest.sh@58 -- # bperfpid=72112 00:16:22.545 06:46:36 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:16:22.545 06:46:36 -- host/digest.sh@60 -- # waitforlisten 72112 /var/tmp/bperf.sock 00:16:22.545 06:46:36 -- common/autotest_common.sh@829 -- # '[' -z 72112 ']' 00:16:22.545 06:46:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:22.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:22.545 06:46:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.545 06:46:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:22.545 06:46:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.545 06:46:36 -- common/autotest_common.sh@10 -- # set +x 00:16:22.545 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:22.545 Zero copy mechanism will not be used. 00:16:22.545 [2024-12-14 06:46:36.360758] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
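The transient-error check traced above (host/digest.sh@71) is a single RPC plus a jq projection. A minimal sketch of the same read, assuming the bperf socket path, bdev name, and jq path shown in the trace:

# Read the per-bdev NVMe error counters (collected because bdev_nvme_set_options
# was called with --nvme-error-stat) and pull out the transient transport error
# count that digest.sh compares against 0.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 )) && echo "nvme0n1 reported ${errcount} transient transport errors (data digest failures)"

In the run above this evaluates to 130, so the digest-error pass is counted as successful and the bperf instance with pid 72057 is killed before the next configuration is started.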
00:16:22.545 [2024-12-14 06:46:36.360856] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72112 ] 00:16:22.545 [2024-12-14 06:46:36.496515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.804 [2024-12-14 06:46:36.556390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.371 06:46:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.371 06:46:37 -- common/autotest_common.sh@862 -- # return 0 00:16:23.371 06:46:37 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:23.371 06:46:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:16:23.629 06:46:37 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:16:23.629 06:46:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.629 06:46:37 -- common/autotest_common.sh@10 -- # set +x 00:16:23.629 06:46:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.629 06:46:37 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:23.629 06:46:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:23.888 nvme0n1 00:16:23.888 06:46:37 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:16:23.888 06:46:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.888 06:46:37 -- common/autotest_common.sh@10 -- # set +x 00:16:23.888 06:46:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.888 06:46:37 -- host/digest.sh@69 -- # bperf_py perform_tests 00:16:23.888 06:46:37 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:24.147 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:24.147 Zero copy mechanism will not be used. 00:16:24.147 Running I/O for 2 seconds... 
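Condensed, the setup just traced for this 131072-byte, qd=16 randwrite pass is a short RPC sequence. A sketch of the same calls, with flags copied from the trace; rpc_cmd and the explicit rpc.py -s /var/tmp/bperf.sock invocations are the two autotest helpers (target-side and bperf-side) seen above:

# Initiator side (bdevperf app on /var/tmp/bperf.sock): count NVMe errors per
# bdev and retry failed I/O at the bdev_nvme layer.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Start with crc32c error injection disabled (rpc_cmd is the autotest wrapper
# around rpc.py for the app under test), then attach the subsystem with data
# digest enabled (--ddgst) so data PDUs are checksummed on the wire.
rpc_cmd accel_error_inject_error -o crc32c -t disable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Turn on crc32c corruption injection (same -o/-t/-i arguments as the trace) and
# run the queued bdevperf job; each corrupted digest then shows up below as a
# "Data digest error" plus a WRITE completed with COMMAND TRANSIENT TRANSPORT ERROR.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests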
00:16:24.147 [2024-12-14 06:46:37.984542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.147 [2024-12-14 06:46:37.984851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.147 [2024-12-14 06:46:37.984882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.147 [2024-12-14 06:46:37.989713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.147 [2024-12-14 06:46:37.990018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.147 [2024-12-14 06:46:37.990047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.147 [2024-12-14 06:46:37.995017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.147 [2024-12-14 06:46:37.995367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.147 [2024-12-14 06:46:37.995442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.147 [2024-12-14 06:46:38.000037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.147 [2024-12-14 06:46:38.000327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.000354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.005023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.005332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.005369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.010119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.010411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.010440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.015068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.015407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.015435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.020209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.020499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.020526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.025502] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.025796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.025824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.030487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.030793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.030821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.035554] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.036091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.036123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.041401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.041700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.041729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.046644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.046999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.047031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.052011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.052376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.052406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.057450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.057770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.057814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.062613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.062947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.062977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.067591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.068076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.068123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.072928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.073287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.073328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.078544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.078940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.078969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.084477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.084814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.084848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.090007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.090302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.090330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.095051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.095387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.095414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.100122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.100416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.100444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.105049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.105360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.105386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.110280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.110609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.110638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.115758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.116265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.116328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.120975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.121307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.121335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.125991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.126279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 
[2024-12-14 06:46:38.126306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.131132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.131475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.131503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.148 [2024-12-14 06:46:38.136451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.148 [2024-12-14 06:46:38.136762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.148 [2024-12-14 06:46:38.136790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.141864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.142235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.142278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.146977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.147324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.147380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.151994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.152282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.152309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.156920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.157264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.157304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.161964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.162254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.162281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.166841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.167240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.167275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.171862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.172341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.172374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.177055] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.177366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.177393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.182041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.182352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.182380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.187141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.187498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.187524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.192156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.192443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.192471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.197114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.197420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.197448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.202087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.202375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.202403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.207100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.207461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.207488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.212092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.212382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.212409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.217076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.217384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.217413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.222048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.222347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.222374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.226932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.227290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.227332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.231988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.232285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.232313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.236941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.237289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.237320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.242381] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.242861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.242908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.248143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.248490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.248518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.253592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.254100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.254135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.259106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.259461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.259488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.409 [2024-12-14 06:46:38.264447] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.409 [2024-12-14 06:46:38.264735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.409 [2024-12-14 06:46:38.264763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.269476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 
[2024-12-14 06:46:38.269944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.270003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.274658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.274995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.275025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.279743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.280070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.280098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.284755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.285112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.285146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.289978] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.290276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.290303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.294957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.295302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.295345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.299994] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.300281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.300307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.304943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.305281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.305321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.310000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.310308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.310335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.314948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.315271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.315313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.319984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.320272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.320299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.324995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.325305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.325331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.330005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.330294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.330320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.335034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.335369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.335397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.340109] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.340397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.340425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.344954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.345302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.345342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.349992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.350289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.350316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.355016] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.355372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.355399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.359973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.360261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.360287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.364989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.365330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.365358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.370260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.370570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.370597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:24.410 [2024-12-14 06:46:38.375395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.375701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.375729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.380420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.380710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.380737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.385484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.385968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.386014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.390722] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.391097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.391134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.410 [2024-12-14 06:46:38.395896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.410 [2024-12-14 06:46:38.396226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.410 [2024-12-14 06:46:38.396255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.670 [2024-12-14 06:46:38.401127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.670 [2024-12-14 06:46:38.401430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.670 [2024-12-14 06:46:38.401457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.670 [2024-12-14 06:46:38.406258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.670 [2024-12-14 06:46:38.406554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.670 [2024-12-14 06:46:38.406583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.670 [2024-12-14 06:46:38.411365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.670 [2024-12-14 06:46:38.411654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.670 [2024-12-14 06:46:38.411682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.670 [2024-12-14 06:46:38.416348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.670 [2024-12-14 06:46:38.416634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.670 [2024-12-14 06:46:38.416661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.670 [2024-12-14 06:46:38.421367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.670 [2024-12-14 06:46:38.421655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.670 [2024-12-14 06:46:38.421682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.670 [2024-12-14 06:46:38.426412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.670 [2024-12-14 06:46:38.426701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.670 [2024-12-14 06:46:38.426728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.670 [2024-12-14 06:46:38.431459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.670 [2024-12-14 06:46:38.431745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.670 [2024-12-14 06:46:38.431773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.670 [2024-12-14 06:46:38.436417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.670 [2024-12-14 06:46:38.436704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.670 [2024-12-14 06:46:38.436731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.670 [2024-12-14 06:46:38.441491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.670 [2024-12-14 06:46:38.441777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.670 [2024-12-14 06:46:38.441804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.670 [2024-12-14 06:46:38.446473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.670 [2024-12-14 06:46:38.446760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.446788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.451579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.451868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.451904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.456549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.457053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.457101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.461810] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.462111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.462138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.466745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.467140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.467175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.471919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.472252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.472280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.476979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.477341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.477383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.482494] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.482788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.482816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.488143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.488479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.488507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.493877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.494228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.494279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.499454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.499938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.499987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.505347] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.505650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.505677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.510798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.511184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.511219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.516451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.516743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 
[2024-12-14 06:46:38.516770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.522043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.522418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.522445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.527810] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.528318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.528380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.533496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.533780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.533807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.539103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.539448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.539475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.544789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.545153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.545189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.550368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.550869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.550958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.556382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.556668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.556696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.561807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.562295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.562373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.567703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.568069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.568105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.573150] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.573491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.573518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.578586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.578874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.578952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.587769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.588172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.588202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.597402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.597747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.597780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.602642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.671 [2024-12-14 06:46:38.602993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.671 [2024-12-14 06:46:38.603025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.671 [2024-12-14 06:46:38.607770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.672 [2024-12-14 06:46:38.608247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.672 [2024-12-14 06:46:38.608282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.672 [2024-12-14 06:46:38.612902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.672 [2024-12-14 06:46:38.613187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.672 [2024-12-14 06:46:38.613215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.672 [2024-12-14 06:46:38.617815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.672 [2024-12-14 06:46:38.618145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.672 [2024-12-14 06:46:38.618215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.672 [2024-12-14 06:46:38.622815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.672 [2024-12-14 06:46:38.623197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.672 [2024-12-14 06:46:38.623233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.672 [2024-12-14 06:46:38.628355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.672 [2024-12-14 06:46:38.628686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.672 [2024-12-14 06:46:38.628716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.672 [2024-12-14 06:46:38.633617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.672 [2024-12-14 06:46:38.633919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.672 [2024-12-14 06:46:38.633947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.672 [2024-12-14 06:46:38.638519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.672 [2024-12-14 06:46:38.638792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.672 [2024-12-14 06:46:38.638821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.672 [2024-12-14 06:46:38.643476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.672 [2024-12-14 06:46:38.643933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.672 [2024-12-14 06:46:38.643979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.672 [2024-12-14 06:46:38.648620] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.672 [2024-12-14 06:46:38.648924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.672 [2024-12-14 06:46:38.648952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.672 [2024-12-14 06:46:38.653547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.672 [2024-12-14 06:46:38.653836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.672 [2024-12-14 06:46:38.653863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.672 [2024-12-14 06:46:38.658725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.672 [2024-12-14 06:46:38.659141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.672 [2024-12-14 06:46:38.659177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.664076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 [2024-12-14 06:46:38.664379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.664407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.669230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 [2024-12-14 06:46:38.669516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.669544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.674148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 
[2024-12-14 06:46:38.674433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.674461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.679157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 [2024-12-14 06:46:38.679513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.679540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.684174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 [2024-12-14 06:46:38.684471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.684499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.689210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 [2024-12-14 06:46:38.689497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.689524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.694130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 [2024-12-14 06:46:38.694417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.694444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.699011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 [2024-12-14 06:46:38.699363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.699390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.703951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 [2024-12-14 06:46:38.704236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.704263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.709142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 [2024-12-14 06:46:38.709463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.709490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.714683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 [2024-12-14 06:46:38.715043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.715080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.719902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 [2024-12-14 06:46:38.720238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.720283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.725203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 [2024-12-14 06:46:38.725510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.725538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.730575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.932 [2024-12-14 06:46:38.730875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.932 [2024-12-14 06:46:38.730973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.932 [2024-12-14 06:46:38.735765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.736141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.736177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.741043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.741344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.741373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.746074] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.746366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.746394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.751033] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.751391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.751419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.756401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.756717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.756745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.761762] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.762106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.762137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.767365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.767674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.767703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.772884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.773356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.773391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.778597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.778955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.778986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:24.933 [2024-12-14 06:46:38.784075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.784416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.784444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.789371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.789673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.789703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.794639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.794981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.795012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.799874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.800390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.800438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.805314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.805623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.805657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.810779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.811138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.811174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.815913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.816225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.816253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.821229] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.821574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.821615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.826891] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.827262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.827336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.832536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.832856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.832894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.838037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.838395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.838424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.843597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.844120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.844155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.849477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.849794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.849826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.854865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.855234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.855281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.860282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.860627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.860656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.865623] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.865947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.865991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.870820] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.871205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.871241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.876230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.933 [2024-12-14 06:46:38.876559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.933 [2024-12-14 06:46:38.876587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.933 [2024-12-14 06:46:38.881356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.934 [2024-12-14 06:46:38.881642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.934 [2024-12-14 06:46:38.881669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.934 [2024-12-14 06:46:38.886749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.934 [2024-12-14 06:46:38.887109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.934 [2024-12-14 06:46:38.887140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.934 [2024-12-14 06:46:38.892250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.934 [2024-12-14 06:46:38.892535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.934 [2024-12-14 06:46:38.892563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.934 [2024-12-14 06:46:38.897206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.934 [2024-12-14 06:46:38.897491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.934 [2024-12-14 06:46:38.897518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:24.934 [2024-12-14 06:46:38.902350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.934 [2024-12-14 06:46:38.902635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.934 [2024-12-14 06:46:38.902663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:24.934 [2024-12-14 06:46:38.907281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.934 [2024-12-14 06:46:38.907617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.934 [2024-12-14 06:46:38.907644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:24.934 [2024-12-14 06:46:38.912344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.934 [2024-12-14 06:46:38.912627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.934 [2024-12-14 06:46:38.912654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:24.934 [2024-12-14 06:46:38.917577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:24.934 [2024-12-14 06:46:38.917903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.934 [2024-12-14 06:46:38.917973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.923039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.923382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.923410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.928326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.928641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 
[2024-12-14 06:46:38.928669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.933378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.933660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.933686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.938156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.938455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.938481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.943034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.943379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.943405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.947933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.948228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.948254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.952705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.953032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.953065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.957627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.957935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.957963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.962539] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.962835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.962862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.967871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.968406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.968455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.973525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.973818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.973846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.978966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.979336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.979364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.984672] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.985025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.985063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.990030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.990384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.990410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:38.995388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:38.995845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:38.995870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:39.000900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:39.001245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:39.001274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:39.006555] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:39.006849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:39.006877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:39.011894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:39.012456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:39.012504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:39.017212] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:39.017493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:39.017520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:39.022002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:39.022303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:39.022330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:39.026785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:39.027141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:39.027176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:39.031893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:39.032446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:39.032495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:39.037277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:39.037563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:39.037591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:39.042369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:39.042647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.194 [2024-12-14 06:46:39.042674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.194 [2024-12-14 06:46:39.047239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.194 [2024-12-14 06:46:39.047556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.047585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.052149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.052429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.052456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.056857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.057222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.057262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.061745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.062073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.062106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.066518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.066797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.066824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.071371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 
[2024-12-14 06:46:39.071649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.071675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.076350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.076631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.076657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.081149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.081426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.081453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.086176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.086471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.086498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.091093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.091430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.091457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.096103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.096401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.096427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.100967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.101251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.101277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.105753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.106087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.106121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.110627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.110990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.111023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.115546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.116037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.116090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.120651] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.120962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.120990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.125535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.125815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.125842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.130278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.130556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.130581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.135073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.135393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.135420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.139929] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.140238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.140264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.144915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.145273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.145301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.150300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.150594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.150621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.155165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.155528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.155554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.160060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.160340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.160367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.164861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.165179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.165206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.169746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.170055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.170084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:25.195 [2024-12-14 06:46:39.174625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.174969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.174999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.195 [2024-12-14 06:46:39.179634] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.195 [2024-12-14 06:46:39.180195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.195 [2024-12-14 06:46:39.180228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.185306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.185607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.185634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.190421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.190702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.190729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.195378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.195656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.195683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.200180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.200458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.200485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.205029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.205310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.205337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.209767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.210077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.210108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.214692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.215033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.215065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.219528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.220006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.220035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.224586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.224867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.224902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.229384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.229665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.229691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.234131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.234410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.234436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.238830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.239243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.239322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.243772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.244278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.244325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.248815] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.249106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.249128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.253479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.253760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.253787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.258272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.258571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.258599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.263170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.263491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.263519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.267988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.268267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.268294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.272763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.273074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.273101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.277613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.277894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.277931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.282374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.282830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.282863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.287700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.288029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.288061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.292753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.293115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.293146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.456 [2024-12-14 06:46:39.297763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.456 [2024-12-14 06:46:39.298248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.456 [2024-12-14 06:46:39.298281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.302731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.303116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.303151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.307658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.307955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 
[2024-12-14 06:46:39.307994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.312497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.312776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.312802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.317380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.317836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.317869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.322493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.322775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.322801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.327453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.327732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.327759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.332240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.332520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.332547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.337127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.337427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.337454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.341980] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.342279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.342310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.346775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.347150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.347185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.351841] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.352170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.352197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.356673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.357170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.357219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.361833] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.362133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.362160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.366605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.366883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.366944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.371404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.371682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.371709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.376202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.376514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.376541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.381076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.381377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.381403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.386275] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.386562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.386590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.391236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.391584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.391611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.396133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.396416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.396442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.401077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.401398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.401440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.406215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.406533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.406560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.411322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.411619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.411646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.416211] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.416490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.416517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.421108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.421408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.421435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.425952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.426232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.426259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.430752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.431098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.457 [2024-12-14 06:46:39.431129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.457 [2024-12-14 06:46:39.435727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.457 [2024-12-14 06:46:39.436237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.458 [2024-12-14 06:46:39.436289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.458 [2024-12-14 06:46:39.440952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.458 [2024-12-14 06:46:39.441251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.458 [2024-12-14 06:46:39.441295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.446217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 
[2024-12-14 06:46:39.446565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.446595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.451485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.452019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.452081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.456685] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.456999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.457027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.461553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.461833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.461859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.466473] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.466752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.466780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.471371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.471650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.471678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.476231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.476527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.476554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.481192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.481489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.481517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.486023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.486302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.486330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.490731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.491126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.491162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.495752] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.496250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.496296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.501080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.501385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.501412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.506252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.506558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.506586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.511532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.512069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.512104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.517394] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.517676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.517702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.720 [2024-12-14 06:46:39.522830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.720 [2024-12-14 06:46:39.523213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.720 [2024-12-14 06:46:39.523399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.528487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.528793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.528822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.533706] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.534059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.534093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.538726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.539089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.539124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.543847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.544384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.544431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.549272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.549558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.549585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:16:25.721 [2024-12-14 06:46:39.554149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.554436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.554464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.559066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.559405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.559432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.563862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.564214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.564277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.568747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.569041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.569069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.573505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.573784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.573811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.578346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.578624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.578651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.583103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.583437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.583464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.588048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.588337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.588364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.592838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.593184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.593233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.597671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.597999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.598050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.602460] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.602966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.603015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.607553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.607834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.607861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.612545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.612824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.612850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.617348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.617628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.617654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.622160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.622438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.622464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.627073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.627431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.627458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.631915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.632262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.632289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.636770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.637103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.637135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.641699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.642174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.642234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.646652] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.647027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.647057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:25.721 [2024-12-14 06:46:39.651574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:25.721 [2024-12-14 06:46:39.651853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.721 [2024-12-14 06:46:39.651892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.721
[... several dozen further entries in the same pattern omitted: each injected data digest error (tcp.c:2036:data_crc32_calc_done) is followed by the WRITE command print (nvme_qpair.c:243) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (nvme_qpair.c:474), the entries differing only in timestamp, lba and sqhd ...]
00:16:26.020 [2024-12-14 06:46:39.968560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x192df60) with pdu=0x2000190fef90 00:16:26.020 [2024-12-14 06:46:39.968928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:26.020 [2024-12-14 06:46:39.968974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:26.020 00:16:26.020 Latency(us) 00:16:26.020 [2024-12-14T06:46:40.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.020 [2024-12-14T06:46:40.012Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:26.020 nvme0n1 : 2.00 6004.91 750.61 0.00 0.00 2658.90 2055.45 13047.62 00:16:26.020 [2024-12-14T06:46:40.012Z] =================================================================================================================== 00:16:26.020 [2024-12-14T06:46:40.012Z] Total : 6004.91 750.61 0.00 0.00 2658.90 2055.45 13047.62 00:16:26.020 0 00:16:26.020 06:46:39 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:16:26.020 06:46:39 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:16:26.020 06:46:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:16:26.020 06:46:39 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:16:26.020 | .driver_specific 00:16:26.020 | .nvme_error 00:16:26.020 | .status_code 00:16:26.020 | .command_transient_transport_error' 00:16:26.587 06:46:40 -- host/digest.sh@71 -- # (( 387 > 0 )) 00:16:26.587 06:46:40 -- host/digest.sh@73 -- # killprocess 72112 00:16:26.587 06:46:40 -- common/autotest_common.sh@936 -- # '[' -z 72112 ']' 00:16:26.587 06:46:40 -- common/autotest_common.sh@940 -- # kill -0 72112 00:16:26.587 06:46:40 -- common/autotest_common.sh@941 -- # uname 00:16:26.587 06:46:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:26.587 06:46:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72112 00:16:26.587 killing process with pid 72112 00:16:26.587 Received shutdown signal, test time was about 2.000000 seconds 00:16:26.587 00:16:26.587 Latency(us) 00:16:26.587 [2024-12-14T06:46:40.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.587 [2024-12-14T06:46:40.579Z] =================================================================================================================== 00:16:26.587 [2024-12-14T06:46:40.579Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:26.587 06:46:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:26.587 06:46:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:26.587 06:46:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72112' 00:16:26.587 06:46:40 -- common/autotest_common.sh@955 -- # kill 72112 00:16:26.587 06:46:40 -- common/autotest_common.sh@960 -- # wait 72112 00:16:26.587 06:46:40 -- host/digest.sh@115 -- # killprocess 71917 00:16:26.587 06:46:40 -- common/autotest_common.sh@936 -- # '[' -z 71917 ']' 00:16:26.587 06:46:40 -- common/autotest_common.sh@940 -- # kill -0 71917 00:16:26.587 06:46:40 -- common/autotest_common.sh@941 -- # uname 00:16:26.587 06:46:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:26.587 06:46:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71917 00:16:26.587 killing process with pid 71917 00:16:26.587 06:46:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:26.587 06:46:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:26.587 06:46:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71917' 00:16:26.587 06:46:40 -- common/autotest_common.sh@955 -- # kill 71917 00:16:26.587 06:46:40 -- common/autotest_common.sh@960 -- # wait 71917 00:16:26.847 
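The pass/fail check above comes from digest.sh's get_transient_errcount helper: it pulls the NVMe error counters out of bdev_get_iostat over the bdevperf RPC socket and requires the transient-transport-error count to be non-zero (387 in this run). A minimal stand-alone sketch of that check, assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev name used here:

  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # Every injected digest error should surface as a COMMAND TRANSIENT TRANSPORT ERROR completion.
  (( errcount > 0 )) && echo "OK: $errcount transient transport errors recorded"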
************************************ 00:16:26.847 END TEST nvmf_digest_error 00:16:26.847 ************************************ 00:16:26.847 00:16:26.847 real 0m17.507s 00:16:26.847 user 0m34.736s 00:16:26.847 sys 0m4.389s 00:16:26.847 06:46:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:26.847 06:46:40 -- common/autotest_common.sh@10 -- # set +x 00:16:26.847 06:46:40 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:16:26.847 06:46:40 -- host/digest.sh@139 -- # nvmftestfini 00:16:26.847 06:46:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:26.847 06:46:40 -- nvmf/common.sh@116 -- # sync 00:16:26.847 06:46:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:26.847 06:46:40 -- nvmf/common.sh@119 -- # set +e 00:16:26.847 06:46:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:26.847 06:46:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:26.847 rmmod nvme_tcp 00:16:26.847 rmmod nvme_fabrics 00:16:27.106 rmmod nvme_keyring 00:16:27.106 06:46:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:27.106 06:46:40 -- nvmf/common.sh@123 -- # set -e 00:16:27.106 06:46:40 -- nvmf/common.sh@124 -- # return 0 00:16:27.106 06:46:40 -- nvmf/common.sh@477 -- # '[' -n 71917 ']' 00:16:27.106 Process with pid 71917 is not found 00:16:27.106 06:46:40 -- nvmf/common.sh@478 -- # killprocess 71917 00:16:27.106 06:46:40 -- common/autotest_common.sh@936 -- # '[' -z 71917 ']' 00:16:27.106 06:46:40 -- common/autotest_common.sh@940 -- # kill -0 71917 00:16:27.106 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (71917) - No such process 00:16:27.106 06:46:40 -- common/autotest_common.sh@963 -- # echo 'Process with pid 71917 is not found' 00:16:27.106 06:46:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:27.106 06:46:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:27.106 06:46:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:27.106 06:46:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:27.106 06:46:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:27.106 06:46:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.106 06:46:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.106 06:46:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.106 06:46:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:27.106 00:16:27.106 real 0m33.033s 00:16:27.106 user 1m3.226s 00:16:27.106 sys 0m9.104s 00:16:27.106 06:46:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:27.106 06:46:40 -- common/autotest_common.sh@10 -- # set +x 00:16:27.106 ************************************ 00:16:27.106 END TEST nvmf_digest 00:16:27.106 ************************************ 00:16:27.106 06:46:40 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:16:27.106 06:46:40 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:16:27.106 06:46:40 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:27.106 06:46:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:27.106 06:46:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.106 06:46:40 -- common/autotest_common.sh@10 -- # set +x 00:16:27.106 ************************************ 00:16:27.106 START TEST nvmf_multipath 00:16:27.106 ************************************ 00:16:27.106 06:46:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:16:27.106 
* Looking for test storage... 00:16:27.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:27.106 06:46:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:27.106 06:46:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:27.106 06:46:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:27.365 06:46:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:27.365 06:46:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:27.365 06:46:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:27.365 06:46:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:27.365 06:46:41 -- scripts/common.sh@335 -- # IFS=.-: 00:16:27.365 06:46:41 -- scripts/common.sh@335 -- # read -ra ver1 00:16:27.365 06:46:41 -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.365 06:46:41 -- scripts/common.sh@336 -- # read -ra ver2 00:16:27.365 06:46:41 -- scripts/common.sh@337 -- # local 'op=<' 00:16:27.365 06:46:41 -- scripts/common.sh@339 -- # ver1_l=2 00:16:27.365 06:46:41 -- scripts/common.sh@340 -- # ver2_l=1 00:16:27.365 06:46:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:27.365 06:46:41 -- scripts/common.sh@343 -- # case "$op" in 00:16:27.365 06:46:41 -- scripts/common.sh@344 -- # : 1 00:16:27.365 06:46:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:27.365 06:46:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:27.365 06:46:41 -- scripts/common.sh@364 -- # decimal 1 00:16:27.365 06:46:41 -- scripts/common.sh@352 -- # local d=1 00:16:27.365 06:46:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.365 06:46:41 -- scripts/common.sh@354 -- # echo 1 00:16:27.365 06:46:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:27.365 06:46:41 -- scripts/common.sh@365 -- # decimal 2 00:16:27.365 06:46:41 -- scripts/common.sh@352 -- # local d=2 00:16:27.365 06:46:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.365 06:46:41 -- scripts/common.sh@354 -- # echo 2 00:16:27.365 06:46:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:27.365 06:46:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:27.365 06:46:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:27.365 06:46:41 -- scripts/common.sh@367 -- # return 0 00:16:27.365 06:46:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.365 06:46:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:27.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.365 --rc genhtml_branch_coverage=1 00:16:27.365 --rc genhtml_function_coverage=1 00:16:27.365 --rc genhtml_legend=1 00:16:27.365 --rc geninfo_all_blocks=1 00:16:27.365 --rc geninfo_unexecuted_blocks=1 00:16:27.365 00:16:27.365 ' 00:16:27.365 06:46:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:27.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.365 --rc genhtml_branch_coverage=1 00:16:27.365 --rc genhtml_function_coverage=1 00:16:27.365 --rc genhtml_legend=1 00:16:27.365 --rc geninfo_all_blocks=1 00:16:27.365 --rc geninfo_unexecuted_blocks=1 00:16:27.365 00:16:27.365 ' 00:16:27.365 06:46:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:27.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.365 --rc genhtml_branch_coverage=1 00:16:27.365 --rc genhtml_function_coverage=1 00:16:27.365 --rc genhtml_legend=1 00:16:27.365 --rc geninfo_all_blocks=1 00:16:27.365 --rc geninfo_unexecuted_blocks=1 00:16:27.365 
00:16:27.365 ' 00:16:27.365 06:46:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:27.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.365 --rc genhtml_branch_coverage=1 00:16:27.365 --rc genhtml_function_coverage=1 00:16:27.365 --rc genhtml_legend=1 00:16:27.365 --rc geninfo_all_blocks=1 00:16:27.365 --rc geninfo_unexecuted_blocks=1 00:16:27.366 00:16:27.366 ' 00:16:27.366 06:46:41 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:27.366 06:46:41 -- nvmf/common.sh@7 -- # uname -s 00:16:27.366 06:46:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.366 06:46:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.366 06:46:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.366 06:46:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.366 06:46:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.366 06:46:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.366 06:46:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.366 06:46:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.366 06:46:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.366 06:46:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.366 06:46:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:16:27.366 06:46:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:16:27.366 06:46:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.366 06:46:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.366 06:46:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:27.366 06:46:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:27.366 06:46:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.366 06:46:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.366 06:46:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.366 06:46:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.366 06:46:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.366 06:46:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.366 06:46:41 -- paths/export.sh@5 -- # export PATH 00:16:27.366 06:46:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.366 06:46:41 -- nvmf/common.sh@46 -- # : 0 00:16:27.366 06:46:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:27.366 06:46:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:27.366 06:46:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:27.366 06:46:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.366 06:46:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.366 06:46:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:27.366 06:46:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:27.366 06:46:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:27.366 06:46:41 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:27.366 06:46:41 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:27.366 06:46:41 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:27.366 06:46:41 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:27.366 06:46:41 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:27.366 06:46:41 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:27.366 06:46:41 -- host/multipath.sh@30 -- # nvmftestinit 00:16:27.366 06:46:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:27.366 06:46:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.366 06:46:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:27.366 06:46:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:27.366 06:46:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:27.366 06:46:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.366 06:46:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.366 06:46:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.366 06:46:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:27.366 06:46:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:27.366 06:46:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:27.366 06:46:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:27.366 06:46:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:27.366 06:46:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:27.366 06:46:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.366 06:46:41 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.366 06:46:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:27.366 06:46:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:27.366 06:46:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:27.366 06:46:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:27.366 06:46:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:27.366 06:46:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.366 06:46:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:27.366 06:46:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:27.366 06:46:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:27.366 06:46:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:27.366 06:46:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:27.366 06:46:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:27.366 Cannot find device "nvmf_tgt_br" 00:16:27.366 06:46:41 -- nvmf/common.sh@154 -- # true 00:16:27.366 06:46:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:27.366 Cannot find device "nvmf_tgt_br2" 00:16:27.366 06:46:41 -- nvmf/common.sh@155 -- # true 00:16:27.366 06:46:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:27.366 06:46:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:27.366 Cannot find device "nvmf_tgt_br" 00:16:27.366 06:46:41 -- nvmf/common.sh@157 -- # true 00:16:27.366 06:46:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:27.366 Cannot find device "nvmf_tgt_br2" 00:16:27.366 06:46:41 -- nvmf/common.sh@158 -- # true 00:16:27.366 06:46:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:27.366 06:46:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:27.366 06:46:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.366 06:46:41 -- nvmf/common.sh@161 -- # true 00:16:27.366 06:46:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.366 06:46:41 -- nvmf/common.sh@162 -- # true 00:16:27.366 06:46:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:27.366 06:46:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:27.366 06:46:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:27.366 06:46:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:27.366 06:46:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:27.366 06:46:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:27.625 06:46:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:27.625 06:46:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:27.625 06:46:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:27.625 06:46:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:27.625 06:46:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:27.625 06:46:41 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:16:27.625 06:46:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:27.625 06:46:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.625 06:46:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.625 06:46:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.625 06:46:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:27.625 06:46:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:27.625 06:46:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.625 06:46:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.625 06:46:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.625 06:46:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.625 06:46:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.625 06:46:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:27.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:16:27.625 00:16:27.625 --- 10.0.0.2 ping statistics --- 00:16:27.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.625 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:27.625 06:46:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:27.625 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.625 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:16:27.625 00:16:27.625 --- 10.0.0.3 ping statistics --- 00:16:27.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.625 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:27.625 06:46:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:27.625 00:16:27.625 --- 10.0.0.1 ping statistics --- 00:16:27.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.625 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:27.625 06:46:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.625 06:46:41 -- nvmf/common.sh@421 -- # return 0 00:16:27.625 06:46:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:27.625 06:46:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.625 06:46:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:27.625 06:46:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:27.625 06:46:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.625 06:46:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:27.625 06:46:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:27.625 06:46:41 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:16:27.625 06:46:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:27.625 06:46:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:27.625 06:46:41 -- common/autotest_common.sh@10 -- # set +x 00:16:27.625 06:46:41 -- nvmf/common.sh@469 -- # nvmfpid=72395 00:16:27.625 06:46:41 -- nvmf/common.sh@470 -- # waitforlisten 72395 00:16:27.625 06:46:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:27.625 06:46:41 -- common/autotest_common.sh@829 -- # '[' -z 72395 ']' 00:16:27.625 06:46:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.625 06:46:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.625 06:46:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.625 06:46:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.625 06:46:41 -- common/autotest_common.sh@10 -- # set +x 00:16:27.625 [2024-12-14 06:46:41.568959] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:27.625 [2024-12-14 06:46:41.569060] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.884 [2024-12-14 06:46:41.710611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:27.884 [2024-12-14 06:46:41.779120] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:27.884 [2024-12-14 06:46:41.779576] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.884 [2024-12-14 06:46:41.779642] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.884 [2024-12-14 06:46:41.779953] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
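nvmf_veth_init, traced above, builds the virtual test network used for the rest of the run: one veth pair for the initiator, two for the target, with the target ends moved into the nvmf_tgt_ns_spdk namespace and the host-side peers joined by the nvmf_br bridge. A condensed sketch of that topology (same commands as in the trace; link-up steps and error handling omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk    # target interfaces live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if          # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br           # bridge all host-side peers together
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge forwards traffic before the target application is configured.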
00:16:27.884 [2024-12-14 06:46:41.780100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.884 [2024-12-14 06:46:41.780112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.820 06:46:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.820 06:46:42 -- common/autotest_common.sh@862 -- # return 0 00:16:28.820 06:46:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:28.820 06:46:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:28.820 06:46:42 -- common/autotest_common.sh@10 -- # set +x 00:16:28.820 06:46:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.820 06:46:42 -- host/multipath.sh@33 -- # nvmfapp_pid=72395 00:16:28.820 06:46:42 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:29.078 [2024-12-14 06:46:42.834655] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.078 06:46:42 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:29.337 Malloc0 00:16:29.337 06:46:43 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:29.337 06:46:43 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:29.595 06:46:43 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:29.854 [2024-12-14 06:46:43.767707] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.854 06:46:43 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:30.113 [2024-12-14 06:46:44.023856] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:30.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:30.113 06:46:44 -- host/multipath.sh@44 -- # bdevperf_pid=72451 00:16:30.113 06:46:44 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:30.113 06:46:44 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:30.113 06:46:44 -- host/multipath.sh@47 -- # waitforlisten 72451 /var/tmp/bdevperf.sock 00:16:30.113 06:46:44 -- common/autotest_common.sh@829 -- # '[' -z 72451 ']' 00:16:30.113 06:46:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:30.113 06:46:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.113 06:46:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
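The target side of the multipath test is then configured through a short rpc.py sequence, visible in the trace above; condensed here for readability (sketch only, same arguments as the run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB malloc bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # first path
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # second path

Both listeners sit on the same address and differ only in port, which is what the ANA-state switching below exercises.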
00:16:30.113 06:46:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.113 06:46:44 -- common/autotest_common.sh@10 -- # set +x 00:16:31.048 06:46:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.048 06:46:45 -- common/autotest_common.sh@862 -- # return 0 00:16:31.048 06:46:45 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:31.306 06:46:45 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:31.872 Nvme0n1 00:16:31.872 06:46:45 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:32.130 Nvme0n1 00:16:32.130 06:46:45 -- host/multipath.sh@78 -- # sleep 1 00:16:32.130 06:46:45 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:33.065 06:46:46 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:16:33.065 06:46:46 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:33.324 06:46:47 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:33.582 06:46:47 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:16:33.582 06:46:47 -- host/multipath.sh@65 -- # dtrace_pid=72497 00:16:33.582 06:46:47 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72395 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:33.582 06:46:47 -- host/multipath.sh@66 -- # sleep 6 00:16:40.159 06:46:53 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:40.159 06:46:53 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:40.159 06:46:53 -- host/multipath.sh@67 -- # active_port=4421 00:16:40.159 06:46:53 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:40.159 Attaching 4 probes... 
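On the initiator side, bdevperf attaches the same subsystem twice, once through each listener, with -x multipath on the second attach so the new connection joins the existing Nvme0 controller instead of being rejected, yielding a single Nvme0n1 bdev with two paths. Condensed from the trace (sketch, using the bdevperf RPC socket from this run):

  brpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
  $brpc bdev_nvme_set_options -r -1
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10   # second path, same controller name

The reconnect-related values (-r -1, -l -1, -o 10) are copied verbatim from the trace. set_ANA_state (nvmf_subsystem_listener_set_ana_state) then flips each listener between optimized, non_optimized and inaccessible before every confirm_io_on_port check.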
00:16:40.159 @path[10.0.0.2, 4421]: 19693 00:16:40.159 @path[10.0.0.2, 4421]: 20407 00:16:40.159 @path[10.0.0.2, 4421]: 20237 00:16:40.159 @path[10.0.0.2, 4421]: 20051 00:16:40.159 @path[10.0.0.2, 4421]: 19922 00:16:40.159 06:46:53 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:40.159 06:46:53 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:40.159 06:46:53 -- host/multipath.sh@69 -- # sed -n 1p 00:16:40.159 06:46:53 -- host/multipath.sh@69 -- # port=4421 00:16:40.159 06:46:53 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:40.159 06:46:53 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:40.159 06:46:53 -- host/multipath.sh@72 -- # kill 72497 00:16:40.159 06:46:53 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:40.159 06:46:53 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:16:40.159 06:46:53 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:40.159 06:46:53 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:40.418 06:46:54 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:16:40.418 06:46:54 -- host/multipath.sh@65 -- # dtrace_pid=72618 00:16:40.418 06:46:54 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72395 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:40.418 06:46:54 -- host/multipath.sh@66 -- # sleep 6 00:16:46.978 06:47:00 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:46.978 06:47:00 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:16:46.978 06:47:00 -- host/multipath.sh@67 -- # active_port=4420 00:16:46.978 06:47:00 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:46.978 Attaching 4 probes... 
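confirm_io_on_port, repeated for each ANA combination, decides whether I/O really flowed through the expected listener: scripts/bpftrace.sh runs nvmf_path.bt against the target pid and records per-path request counts, printed as @path[10.0.0.2, PORT]: COUNT lines in trace.txt, and the function compares the port extracted from those lines with the listener currently reporting the requested ANA state. A simplified sketch of the two comparisons (same jq/awk filters and trace.txt path as the run):

  active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  # Port that actually carried I/O according to the bpftrace output:
  io_port=$(awk '$1=="@path[10.0.0.2," {print $2}' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt \
      | cut -d ']' -f1 | sed -n 1p)
  [[ $io_port == "$active_port" ]] && echo "I/O went through the expected port ($active_port)"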
00:16:46.978 @path[10.0.0.2, 4420]: 19853 00:16:46.978 @path[10.0.0.2, 4420]: 19870 00:16:46.978 @path[10.0.0.2, 4420]: 20026 00:16:46.978 @path[10.0.0.2, 4420]: 20003 00:16:46.978 @path[10.0.0.2, 4420]: 20108 00:16:46.978 06:47:00 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:46.978 06:47:00 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:46.978 06:47:00 -- host/multipath.sh@69 -- # sed -n 1p 00:16:46.978 06:47:00 -- host/multipath.sh@69 -- # port=4420 00:16:46.978 06:47:00 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:16:46.978 06:47:00 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:16:46.978 06:47:00 -- host/multipath.sh@72 -- # kill 72618 00:16:46.978 06:47:00 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:46.978 06:47:00 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:16:46.978 06:47:00 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:46.978 06:47:00 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:46.978 06:47:00 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:16:46.978 06:47:00 -- host/multipath.sh@65 -- # dtrace_pid=72734 00:16:46.978 06:47:00 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72395 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:46.978 06:47:00 -- host/multipath.sh@66 -- # sleep 6 00:16:53.540 06:47:06 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:16:53.540 06:47:06 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:16:53.540 06:47:07 -- host/multipath.sh@67 -- # active_port=4421 00:16:53.540 06:47:07 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:53.540 Attaching 4 probes... 
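Each @path[addr, port]: N line in trace.txt appears to be a running count of requests observed on that path during the six-second window, so the active port can be recovered with the cut/awk/sed pipeline shown in the log. A minimal standalone example, using one of the counter lines from this run:

  # Given a trace.txt line such as:  @path[10.0.0.2, 4420]: 19853
  echo '@path[10.0.0.2, 4420]: 19853' \
      | cut -d ']' -f1 \
      | awk '$1=="@path[10.0.0.2," {print $2}' \
      | sed -n 1p
  # prints: 4420   (the port that actually carried the I/O)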
00:16:53.540 @path[10.0.0.2, 4421]: 15236 00:16:53.540 @path[10.0.0.2, 4421]: 19536 00:16:53.540 @path[10.0.0.2, 4421]: 19733 00:16:53.540 @path[10.0.0.2, 4421]: 19592 00:16:53.540 @path[10.0.0.2, 4421]: 19656 00:16:53.540 06:47:07 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:16:53.540 06:47:07 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:16:53.540 06:47:07 -- host/multipath.sh@69 -- # sed -n 1p 00:16:53.540 06:47:07 -- host/multipath.sh@69 -- # port=4421 00:16:53.540 06:47:07 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:16:53.540 06:47:07 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:16:53.540 06:47:07 -- host/multipath.sh@72 -- # kill 72734 00:16:53.540 06:47:07 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:16:53.540 06:47:07 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:16:53.540 06:47:07 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:53.540 06:47:07 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:53.798 06:47:07 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:16:53.798 06:47:07 -- host/multipath.sh@65 -- # dtrace_pid=72844 00:16:53.798 06:47:07 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72395 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:16:53.798 06:47:07 -- host/multipath.sh@66 -- # sleep 6 00:17:00.363 06:47:13 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:00.363 06:47:13 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:00.363 06:47:14 -- host/multipath.sh@67 -- # active_port= 00:17:00.363 06:47:14 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:00.363 Attaching 4 probes... 
00:17:00.363 00:17:00.363 00:17:00.363 00:17:00.363 00:17:00.363 00:17:00.363 06:47:14 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:00.363 06:47:14 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:00.363 06:47:14 -- host/multipath.sh@69 -- # sed -n 1p 00:17:00.363 06:47:14 -- host/multipath.sh@69 -- # port= 00:17:00.363 06:47:14 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:00.363 06:47:14 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:00.363 06:47:14 -- host/multipath.sh@72 -- # kill 72844 00:17:00.363 06:47:14 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:00.363 06:47:14 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:00.363 06:47:14 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:00.363 06:47:14 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:00.622 06:47:14 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:00.622 06:47:14 -- host/multipath.sh@65 -- # dtrace_pid=72962 00:17:00.622 06:47:14 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72395 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:00.622 06:47:14 -- host/multipath.sh@66 -- # sleep 6 00:17:07.194 06:47:20 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:07.194 06:47:20 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:07.194 06:47:20 -- host/multipath.sh@67 -- # active_port=4421 00:17:07.194 06:47:20 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:07.194 Attaching 4 probes... 
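set_ANA_state, as logged above at multipath.sh@58/@59, is just two listener updates, one per portal; when both are set to inaccessible the jq filter matches nothing and the probe output above stays empty, so confirm_io_on_port '' '' passes with an empty port. A hedged sketch built from the RPC calls visible in the log (NQN, address and ports are the ones used in this run):

  # The two listener updates behind set_ANA_state, as logged.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  set_ANA_state() {
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  # e.g. set_ANA_state inaccessible inaccessible   # no usable path -> no @path counters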
00:17:07.194 @path[10.0.0.2, 4421]: 18947 00:17:07.194 @path[10.0.0.2, 4421]: 19585 00:17:07.194 @path[10.0.0.2, 4421]: 19682 00:17:07.194 @path[10.0.0.2, 4421]: 19638 00:17:07.194 @path[10.0.0.2, 4421]: 19281 00:17:07.194 06:47:20 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:07.194 06:47:20 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:07.194 06:47:20 -- host/multipath.sh@69 -- # sed -n 1p 00:17:07.194 06:47:20 -- host/multipath.sh@69 -- # port=4421 00:17:07.194 06:47:20 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:07.194 06:47:20 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:07.194 06:47:20 -- host/multipath.sh@72 -- # kill 72962 00:17:07.194 06:47:20 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:07.194 06:47:20 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:07.194 [2024-12-14 06:47:21.114538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.114837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.114871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.114881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.114889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.114943] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.114952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.114962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.114971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.114979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.114988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.114997] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115006] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115015] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115024] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115033] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115041] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115059] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115067] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115093] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115119] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115127] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115136] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115152] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115179] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115188] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115197] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115206] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 [2024-12-14 06:47:21.115223] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1f230 is same with the state(5) to be set 00:17:07.194 06:47:21 -- host/multipath.sh@101 
-- # sleep 1 00:17:08.570 06:47:22 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:08.570 06:47:22 -- host/multipath.sh@65 -- # dtrace_pid=73085 00:17:08.570 06:47:22 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72395 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:08.570 06:47:22 -- host/multipath.sh@66 -- # sleep 6 00:17:15.132 06:47:28 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:15.132 06:47:28 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:15.132 06:47:28 -- host/multipath.sh@67 -- # active_port=4420 00:17:15.132 06:47:28 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.132 Attaching 4 probes... 00:17:15.132 @path[10.0.0.2, 4420]: 19018 00:17:15.132 @path[10.0.0.2, 4420]: 19108 00:17:15.132 @path[10.0.0.2, 4420]: 19484 00:17:15.132 @path[10.0.0.2, 4420]: 19306 00:17:15.132 @path[10.0.0.2, 4420]: 19631 00:17:15.132 06:47:28 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:15.132 06:47:28 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:15.132 06:47:28 -- host/multipath.sh@69 -- # sed -n 1p 00:17:15.132 06:47:28 -- host/multipath.sh@69 -- # port=4420 00:17:15.132 06:47:28 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:15.132 06:47:28 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:15.132 06:47:28 -- host/multipath.sh@72 -- # kill 73085 00:17:15.132 06:47:28 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:15.132 06:47:28 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:15.132 [2024-12-14 06:47:28.666443] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:15.132 06:47:28 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:15.132 06:47:28 -- host/multipath.sh@111 -- # sleep 6 00:17:21.695 06:47:34 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:17:21.695 06:47:34 -- host/multipath.sh@65 -- # dtrace_pid=73265 00:17:21.695 06:47:34 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 72395 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:21.695 06:47:34 -- host/multipath.sh@66 -- # sleep 6 00:17:28.275 06:47:40 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:28.275 06:47:40 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:28.275 06:47:41 -- host/multipath.sh@67 -- # active_port=4421 00:17:28.275 06:47:41 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.275 Attaching 4 probes... 
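The last two phases exercise listener hot-removal and hot-add rather than ANA flips: dropping the 4421 listener (multipath.sh@100) forces I/O back onto 4420, which the non_optimized/4420 check above confirms, and re-adding 4421 as optimized (@107/@108) moves it back, as the optimized/4421 check that follows shows. The RPCs are the same ones logged above:

  # Failover by removing and re-adding a listener, as in this run.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # ... I/O keeps running and lands on 10.0.0.2:4420 (non_optimized) ...
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized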
00:17:28.275 @path[10.0.0.2, 4421]: 18991 00:17:28.275 @path[10.0.0.2, 4421]: 19205 00:17:28.275 @path[10.0.0.2, 4421]: 19035 00:17:28.275 @path[10.0.0.2, 4421]: 19206 00:17:28.275 @path[10.0.0.2, 4421]: 19472 00:17:28.275 06:47:41 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:28.275 06:47:41 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:28.275 06:47:41 -- host/multipath.sh@69 -- # sed -n 1p 00:17:28.275 06:47:41 -- host/multipath.sh@69 -- # port=4421 00:17:28.275 06:47:41 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:28.275 06:47:41 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:28.275 06:47:41 -- host/multipath.sh@72 -- # kill 73265 00:17:28.275 06:47:41 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.275 06:47:41 -- host/multipath.sh@114 -- # killprocess 72451 00:17:28.275 06:47:41 -- common/autotest_common.sh@936 -- # '[' -z 72451 ']' 00:17:28.275 06:47:41 -- common/autotest_common.sh@940 -- # kill -0 72451 00:17:28.275 06:47:41 -- common/autotest_common.sh@941 -- # uname 00:17:28.275 06:47:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:28.275 06:47:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72451 00:17:28.275 killing process with pid 72451 00:17:28.275 06:47:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:28.275 06:47:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:28.275 06:47:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72451' 00:17:28.275 06:47:41 -- common/autotest_common.sh@955 -- # kill 72451 00:17:28.275 06:47:41 -- common/autotest_common.sh@960 -- # wait 72451 00:17:28.275 Connection closed with partial response: 00:17:28.275 00:17:28.275 00:17:28.275 06:47:41 -- host/multipath.sh@116 -- # wait 72451 00:17:28.275 06:47:41 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:28.275 [2024-12-14 06:46:44.107760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:28.275 [2024-12-14 06:46:44.107912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72451 ] 00:17:28.275 [2024-12-14 06:46:44.248931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.275 [2024-12-14 06:46:44.306520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.275 Running I/O for 90 seconds... 
00:17:28.275 [2024-12-14 06:46:54.157328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.275 [2024-12-14 06:46:54.157410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.275 [2024-12-14 06:46:54.157506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.157545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.157580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.275 [2024-12-14 06:46:54.157614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.275 [2024-12-14 06:46:54.157648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.157682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.275 [2024-12-14 06:46:54.157716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.157750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.157784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.275 [2024-12-14 06:46:54.157831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.157869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.275 [2024-12-14 06:46:54.157934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.157979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.157999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.158014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.158034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.158050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.158074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.158090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.158110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.158125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.158146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.158161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.158181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.158196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.158216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.158231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.158251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:91120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.158266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.158285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.158314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.158344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.275 [2024-12-14 06:46:54.158360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.158673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.275 [2024-12-14 06:46:54.158699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.158722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.275 [2024-12-14 06:46:54.158738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.158758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.275 [2024-12-14 06:46:54.158772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.275 [2024-12-14 06:46:54.158792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.158806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.158826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.158841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.158860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:28.276 [2024-12-14 06:46:54.158875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.158931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.158952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.158993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.159010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.159065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.159105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.159149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.159201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.159272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.159324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.159389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.159423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.159457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.159491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.159525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.159559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.159593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.159628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.159662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.159704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.159742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.159777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:91824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.159812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.159846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.159880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.159915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.159952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.159972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.160007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.160025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.160045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.160060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.160079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.160094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.160114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.160128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:17:28.276 [2024-12-14 06:46:54.160147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.160170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.160192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.160207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.160227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.160241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.160261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.160276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.160296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.276 [2024-12-14 06:46:54.160311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.160333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.160348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.276 [2024-12-14 06:46:54.160368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.276 [2024-12-14 06:46:54.160383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.277 [2024-12-14 06:46:54.160486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.277 [2024-12-14 06:46:54.160801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.160957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.160976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.277 [2024-12-14 06:46:54.160991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.277 [2024-12-14 06:46:54.161025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.277 [2024-12-14 06:46:54.161068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.161103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.161137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.277 [2024-12-14 06:46:54.161171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:28.277 [2024-12-14 06:46:54.161206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.161240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.277 [2024-12-14 06:46:54.161274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.277 [2024-12-14 06:46:54.161309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.161342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.277 [2024-12-14 06:46:54.161377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.161411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.161445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.161486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.161522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 
nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.161557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.161591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.161611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.161625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.163053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.277 [2024-12-14 06:46:54.163088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.163120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.277 [2024-12-14 06:46:54.163140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.163163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.277 [2024-12-14 06:46:54.163180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.277 [2024-12-14 06:46:54.163204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:46:54.163221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:46:54.163304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:46:54.163354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:46:54.163388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:46:54.163433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:46:54.163470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:46:54.163505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:46:54.163539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:46:54.163574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:46:54.163632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:46:54.163668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:46:54.163702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:46:54.163737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:46:54.163772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
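
The repeating NOTICE pairs in this trace come from SPDK's qpair logging helpers: nvme_io_qpair_print_command echoes each submitted READ/WRITE, and spdk_nvme_print_completion echoes the matching completion (same cid on both lines). The "(03/02)" field is the NVMe status printed as (SCT/SC): status code type 0x3 is the path-related group, and code 0x02 within it is what SPDK renders as ASYMMETRIC ACCESS INACCESSIBLE, consistent with the ANA group behind this path being reported inaccessible during this phase of the test. A minimal sketch of decoding that field (a hypothetical Python helper, not part of the test scripts):

# Hypothetical helper, not part of the SPDK test scripts: split the "(SCT/SC)"
# status field printed by spdk_nvme_print_completion into its two components.
def decode_status(field):
    sct, sc = field.strip("()").split("/")
    return int(sct, 16), int(sc, 16)

# "(03/02)": status code type 0x3 (path-related), status code 0x02, the value
# SPDK prints as "ASYMMETRIC ACCESS INACCESSIBLE" in the lines above and below.
print(decode_status("(03/02)"))  # -> (3, 2)
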
00:17:28.278 [2024-12-14 06:46:54.163791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:46:54.163806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:46:54.163826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:46:54.163841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:47:00.690316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:47:00.690431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:47:00.690468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:47:00.690503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:47:00.690536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:47:00.690570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:47:00.690604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:47:00.690638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:47:00.690672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:47:00.690706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:47:00.690740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:47:00.690773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:47:00.690807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:47:00.690855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:47:00.690889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.690968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:47:00.690985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.691006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:47:00.691022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.691043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:47:00.691059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.691079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:47:00.691094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.691115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:47:00.691130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.691151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.278 [2024-12-14 06:47:00.691167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.691192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:47:00.691209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.278 [2024-12-14 06:47:00.691246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.278 [2024-12-14 06:47:00.691261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.279 [2024-12-14 06:47:00.691311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.279 [2024-12-14 06:47:00.691388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:28.279 [2024-12-14 06:47:00.691459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.279 [2024-12-14 06:47:00.691563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.691967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.691984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.692020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.279 [2024-12-14 06:47:00.692055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.692090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.279 [2024-12-14 06:47:00.692126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.279 [2024-12-14 06:47:00.692161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.279 [2024-12-14 06:47:00.692238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.279 [2024-12-14 06:47:00.692274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.279 [2024-12-14 06:47:00.692308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.279 [2024-12-14 06:47:00.692357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.692392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.692426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.279 [2024-12-14 06:47:00.692460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.692494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.279 [2024-12-14 06:47:00.692528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:28.279 [2024-12-14 06:47:00.692548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.692562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.280 
[2024-12-14 06:47:00.692582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.692596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.692616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.692630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.692650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.692664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.692683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.692698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.692718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.692736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.692764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.692780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.692799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.280 [2024-12-14 06:47:00.692815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.692834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.280 [2024-12-14 06:47:00.692849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.692868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.280 [2024-12-14 06:47:00.692883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.692916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.280 [2024-12-14 06:47:00.692934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 
cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.692954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.692969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.692988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.280 [2024-12-14 06:47:00.693003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.280 [2024-12-14 06:47:00.693037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.280 [2024-12-14 06:47:00.693071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.280 [2024-12-14 06:47:00.693173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.280 [2024-12-14 06:47:00.693404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.280 [2024-12-14 06:47:00.693439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693668] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.280 [2024-12-14 06:47:00.693846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.693983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.280 [2024-12-14 06:47:00.693998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.280 [2024-12-14 06:47:00.694023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.280 [2024-12-14 06:47:00.694041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.694061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:28.281 [2024-12-14 06:47:00.694077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.694097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.694112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.694133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.694148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.694177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.694193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.694214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.694229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.694249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.694264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.694299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.694314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.694333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.694348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.694368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.694383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.694402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.694418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.694437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 
nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.694452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.694472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.694487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.694506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.694521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.695520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.695548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.695582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.695599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.695638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.695658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.695687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.695703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.695731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.695746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.695774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.695790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.695817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.695833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.695860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.695876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.695904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.695919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.695960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.695995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.696024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.696040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.696069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.696085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.696113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.696130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.696159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.696175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.696204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:00.696228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.696258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.696274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.696303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.696319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
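
Two other fields worth reading in these completions: dnr:0 means the Do Not Retry bit is clear, so the failed command may be retried (for example on another path), and sqhd is the submission queue head pointer reported back by the controller. Across this burst sqhd advances by one per completion and wraps from 0x7f to 0x00, consistent with a 128-entry submission queue. A small, illustration-only check of that progression over already-parsed sqhd values (the queue depth of 128 is an assumption read off the wrap point, not taken from the test configuration):

# Illustration only: check that successive sqhd values advance by one modulo an
# assumed 128-entry submission queue, matching the wrap from 0x7f to 0x00 above.
def sqhd_monotonic(values, qdepth=128):
    return all((b - a) % qdepth == 1 for a, b in zip(values, values[1:]))

print(sqhd_monotonic([0x7e, 0x7f, 0x00, 0x01]))  # -> True
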
00:17:28.281 [2024-12-14 06:47:00.696363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.696399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:00.696428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:00.696448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:07.716333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:07.716411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:07.716484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.281 [2024-12-14 06:47:07.716506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:07.716530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:07.716547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:07.716568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:07.716584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:07.716604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:07.716619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:07.716640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:07.716671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:07.716691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.281 [2024-12-14 06:47:07.716706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:28.281 [2024-12-14 06:47:07.716726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.282 [2024-12-14 06:47:07.716756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.716779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.716811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.716832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.282 [2024-12-14 06:47:07.716847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.716868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.716883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.716904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.282 [2024-12-14 06:47:07.716919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.716957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.716974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.716999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.282 [2024-12-14 06:47:07.717126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717163] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:106872 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:28.282 [2024-12-14 06:47:07.717555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.282 [2024-12-14 06:47:07.717661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.717698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.717990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.282 [2024-12-14 06:47:07.718015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.718054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.718108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.282 [2024-12-14 06:47:07.718146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.282 [2024-12-14 06:47:07.718183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718204] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.718220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.718257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.718294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.282 [2024-12-14 06:47:07.718331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.718368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.718405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.718442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.718491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.718528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:28.282 [2024-12-14 06:47:07.718550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.282 [2024-12-14 06:47:07.718566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:28.283 
[2024-12-14 06:47:07.718588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.718605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.718627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.718643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.718665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.718681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.718703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.718719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.718741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.718756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.718778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.283 [2024-12-14 06:47:07.718794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.718815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.718831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.718852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.283 [2024-12-14 06:47:07.718868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.718890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.718945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.718972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.283 [2024-12-14 06:47:07.718998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.283 [2024-12-14 06:47:07.719082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.283 [2024-12-14 06:47:07.719211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.283 [2024-12-14 06:47:07.719282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.283 [2024-12-14 06:47:07.719335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.283 [2024-12-14 06:47:07.719412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:17:28.283 [2024-12-14 06:47:07.719830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.283 [2024-12-14 06:47:07.719884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.283 [2024-12-14 06:47:07.719974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:28.283 [2024-12-14 06:47:07.719998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.283 [2024-12-14 06:47:07.720014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.720061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.720100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.720138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.284 [2024-12-14 06:47:07.720176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.720215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720251] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.720267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.284 [2024-12-14 06:47:07.720304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.284 [2024-12-14 06:47:07.720341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.284 [2024-12-14 06:47:07.720378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.284 [2024-12-14 06:47:07.720416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.720453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.720490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.720537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.720573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.720615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 
06:47:07.720637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.720653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.720690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.720712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.720727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.721663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.721690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.721724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.721742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.721771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.721788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.721817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.284 [2024-12-14 06:47:07.721832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.721861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.284 [2024-12-14 06:47:07.721877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.721922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.284 [2024-12-14 06:47:07.721942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.721985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.722003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.722032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.722048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.722077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.284 [2024-12-14 06:47:07.722093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.722122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.722137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.722166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.722182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.722211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.284 [2024-12-14 06:47:07.722227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.722256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.722274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.722304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.722320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.722348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.722364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.722393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.722409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.722438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.722454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:07.722482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:07.722498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:21.115347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:21.115418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:21.115461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:21.115479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:21.115494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.284 [2024-12-14 06:47:21.115508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.284 [2024-12-14 06:47:21.115523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.285 [2024-12-14 06:47:21.115890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.285 [2024-12-14 06:47:21.115965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.115981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.115995] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.285 [2024-12-14 06:47:21.116230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.285 [2024-12-14 06:47:21.116363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.285 [2024-12-14 06:47:21.116391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.285 [2024-12-14 06:47:21.116421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.285 [2024-12-14 06:47:21.116479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.285 [2024-12-14 06:47:21.116508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.285 [2024-12-14 06:47:21.116874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.116982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.116998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.285 [2024-12-14 06:47:21.117029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.117044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.285 [2024-12-14 06:47:21.117058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.285 [2024-12-14 06:47:21.117074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.285 [2024-12-14 06:47:21.117088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 
06:47:21.117267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.286 [2024-12-14 06:47:21.117405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.286 [2024-12-14 06:47:21.117434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.286 [2024-12-14 06:47:21.117463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.286 [2024-12-14 06:47:21.117520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.286 [2024-12-14 06:47:21.117549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.286 [2024-12-14 06:47:21.117578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.286 [2024-12-14 06:47:21.117607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.286 [2024-12-14 06:47:21.117665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.286 [2024-12-14 06:47:21.117788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.286 [2024-12-14 06:47:21.117817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117861] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.117976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.117998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.118014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.118029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.118045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.118059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.118075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.118089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.118112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.118127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.118143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.118157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.118173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.286 [2024-12-14 06:47:21.118187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.118206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 
nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.286 [2024-12-14 06:47:21.118221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.118237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.118252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.118267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.286 [2024-12-14 06:47:21.118295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.286 [2024-12-14 06:47:21.118310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.286 [2024-12-14 06:47:21.118324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.118353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.118382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.118411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.118440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.118468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.118507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110456 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.118537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.118567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.118596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.118624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.118653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.118681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.118727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.118756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.118786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.118817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:28.287 [2024-12-14 06:47:21.118847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.118876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.118982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.118998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.119113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.119176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.119223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 
06:47:21.119254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.119373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.119408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:28.287 [2024-12-14 06:47:21.119496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.287 [2024-12-14 06:47:21.119672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.287 [2024-12-14 06:47:21.119687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.288 [2024-12-14 06:47:21.119701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.288 [2024-12-14 06:47:21.119717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.288 [2024-12-14 06:47:21.119731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.288 [2024-12-14 06:47:21.119746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.288 [2024-12-14 06:47:21.119759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.288 [2024-12-14 06:47:21.119780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.288 [2024-12-14 06:47:21.119795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.288 [2024-12-14 06:47:21.119810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4c50 is same with the state(5) to be set 00:17:28.288 [2024-12-14 06:47:21.119828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:28.288 [2024-12-14 06:47:21.119838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:28.288 [2024-12-14 06:47:21.119849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110040 len:8 PRP1 0x0 PRP2 0x0 00:17:28.288 [2024-12-14 06:47:21.119862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.288 [2024-12-14 06:47:21.119908] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdc4c50 was disconnected and freed. reset controller. 
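The long run of *NOTICE* pairs above is the host-side NVMe driver (nvme_qpair.c via bdev_nvme) draining qpair 0xdc4c50 after its TCP connection to the target was torn down: each nvme_io_qpair_print_command entry shows a READ or WRITE that was still queued, and the matching spdk_nvme_print_completion entry shows the completion manufactured for it, ABORTED - SQ DELETION (status 00/08). Once every queued request has been failed this way the qpair is disconnected and freed and a controller reset is scheduled ("reset controller" in the last entry). A minimal sketch for summarizing such a dump from a saved copy of the console output — the file name nvmf_console.log is hypothetical, the test itself does not write it:

  LOG=nvmf_console.log   # hypothetical: a saved copy of this console output

  # How many completions were force-failed with the SQ DELETION status.
  grep -c 'ABORTED - SQ DELETION' "$LOG"

  # Break the aborted submissions down by opcode (READ vs WRITE).
  grep 'nvme_io_qpair_print_command' "$LOG" | grep -oE 'READ|WRITE' | sort | uniq -c

  # Pull out the LBAs of the aborted I/Os, e.g. to confirm they fall inside
  # the range bdevperf was verifying.
  grep 'nvme_io_qpair_print_command' "$LOG" | sed -n 's/.*lba:\([0-9]*\).*/\1/p' | sort -n | uniq | head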
00:17:28.288 [2024-12-14 06:47:21.120931] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:28.288 [2024-12-14 06:47:21.121015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda1b20 (9): Bad file descriptor 00:17:28.288 [2024-12-14 06:47:21.121326] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:28.288 [2024-12-14 06:47:21.121398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:28.288 [2024-12-14 06:47:21.121449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:28.288 [2024-12-14 06:47:21.121471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda1b20 with addr=10.0.0.2, port=4421 00:17:28.288 [2024-12-14 06:47:21.121487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda1b20 is same with the state(5) to be set 00:17:28.288 [2024-12-14 06:47:21.121521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda1b20 (9): Bad file descriptor 00:17:28.288 [2024-12-14 06:47:21.121551] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:28.288 [2024-12-14 06:47:21.121567] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:28.288 [2024-12-14 06:47:21.121583] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:28.288 [2024-12-14 06:47:21.121613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:28.288 [2024-12-14 06:47:21.121629] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:28.288 [2024-12-14 06:47:31.171359] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
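This block is the reconnect loop that follows the drain above: nvme_ctrlr_disconnect kicks off a reset, the first reconnect attempt to 10.0.0.2 port 4421 fails with errno 111 (connection refused) on both the uring and posix socket paths, the controller is marked failed, and bdev_nvme keeps retrying until, roughly ten seconds later, a retry succeeds and "Resetting controller successful" is logged. A rough sketch of driving the same kind of failover by hand with the RPC calls used elsewhere in this log — assuming the subsystem nqn.2016-06.io.spdk:cnode1 from this run is still exported, and that the host side was attached with a reconnect policy that keeps retrying (the timeout test further down attaches its controller with --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2, which is the kind of policy that governs how long this loop runs); the exact sequence multipath.sh itself uses is not shown in this excerpt:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Drop the listener the host is currently connected to: queued I/O on that
  # path is aborted (SQ DELETION) and bdev_nvme enters its reset/reconnect
  # loop, logging connect() failures with errno 111 while nothing listens.
  $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421

  # Bring the listener back; the next reconnect attempt succeeds and the
  # controller reset completes ("Resetting controller successful").
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421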
00:17:28.288 Received shutdown signal, test time was about 55.311376 seconds 00:17:28.288 00:17:28.288 Latency(us) 00:17:28.288 [2024-12-14T06:47:42.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.288 [2024-12-14T06:47:42.280Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:28.288 Verification LBA range: start 0x0 length 0x4000 00:17:28.288 Nvme0n1 : 55.31 11159.41 43.59 0.00 0.00 11450.93 443.11 7046430.72 00:17:28.288 [2024-12-14T06:47:42.280Z] =================================================================================================================== 00:17:28.288 [2024-12-14T06:47:42.280Z] Total : 11159.41 43.59 0.00 0.00 11450.93 443.11 7046430.72 00:17:28.288 06:47:41 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.288 06:47:41 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:17:28.288 06:47:41 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:28.288 06:47:41 -- host/multipath.sh@125 -- # nvmftestfini 00:17:28.288 06:47:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:28.288 06:47:41 -- nvmf/common.sh@116 -- # sync 00:17:28.288 06:47:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:28.288 06:47:41 -- nvmf/common.sh@119 -- # set +e 00:17:28.288 06:47:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:28.288 06:47:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:28.288 rmmod nvme_tcp 00:17:28.288 rmmod nvme_fabrics 00:17:28.288 rmmod nvme_keyring 00:17:28.288 06:47:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:28.288 06:47:41 -- nvmf/common.sh@123 -- # set -e 00:17:28.288 06:47:41 -- nvmf/common.sh@124 -- # return 0 00:17:28.288 06:47:41 -- nvmf/common.sh@477 -- # '[' -n 72395 ']' 00:17:28.288 06:47:41 -- nvmf/common.sh@478 -- # killprocess 72395 00:17:28.288 06:47:41 -- common/autotest_common.sh@936 -- # '[' -z 72395 ']' 00:17:28.288 06:47:41 -- common/autotest_common.sh@940 -- # kill -0 72395 00:17:28.288 06:47:41 -- common/autotest_common.sh@941 -- # uname 00:17:28.288 06:47:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:28.288 06:47:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72395 00:17:28.288 06:47:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:28.288 06:47:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:28.288 killing process with pid 72395 00:17:28.288 06:47:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72395' 00:17:28.288 06:47:41 -- common/autotest_common.sh@955 -- # kill 72395 00:17:28.288 06:47:41 -- common/autotest_common.sh@960 -- # wait 72395 00:17:28.288 06:47:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:28.288 06:47:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:28.288 06:47:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:28.288 06:47:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:28.288 06:47:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:28.288 06:47:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.288 06:47:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.288 06:47:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.288 06:47:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:28.288 ************************************ 00:17:28.288 END TEST 
nvmf_multipath 00:17:28.288 ************************************ 00:17:28.288 00:17:28.288 real 1m1.165s 00:17:28.288 user 2m48.694s 00:17:28.288 sys 0m18.398s 00:17:28.288 06:47:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:28.288 06:47:42 -- common/autotest_common.sh@10 -- # set +x 00:17:28.288 06:47:42 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:28.288 06:47:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:28.288 06:47:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:28.288 06:47:42 -- common/autotest_common.sh@10 -- # set +x 00:17:28.288 ************************************ 00:17:28.288 START TEST nvmf_timeout 00:17:28.288 ************************************ 00:17:28.288 06:47:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:17:28.548 * Looking for test storage... 00:17:28.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:28.548 06:47:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:28.548 06:47:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:28.548 06:47:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:28.548 06:47:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:28.548 06:47:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:28.548 06:47:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:28.548 06:47:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:28.548 06:47:42 -- scripts/common.sh@335 -- # IFS=.-: 00:17:28.548 06:47:42 -- scripts/common.sh@335 -- # read -ra ver1 00:17:28.548 06:47:42 -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.548 06:47:42 -- scripts/common.sh@336 -- # read -ra ver2 00:17:28.548 06:47:42 -- scripts/common.sh@337 -- # local 'op=<' 00:17:28.548 06:47:42 -- scripts/common.sh@339 -- # ver1_l=2 00:17:28.548 06:47:42 -- scripts/common.sh@340 -- # ver2_l=1 00:17:28.548 06:47:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:28.548 06:47:42 -- scripts/common.sh@343 -- # case "$op" in 00:17:28.548 06:47:42 -- scripts/common.sh@344 -- # : 1 00:17:28.548 06:47:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:28.548 06:47:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:28.548 06:47:42 -- scripts/common.sh@364 -- # decimal 1 00:17:28.548 06:47:42 -- scripts/common.sh@352 -- # local d=1 00:17:28.548 06:47:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.548 06:47:42 -- scripts/common.sh@354 -- # echo 1 00:17:28.548 06:47:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:28.548 06:47:42 -- scripts/common.sh@365 -- # decimal 2 00:17:28.548 06:47:42 -- scripts/common.sh@352 -- # local d=2 00:17:28.548 06:47:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.548 06:47:42 -- scripts/common.sh@354 -- # echo 2 00:17:28.548 06:47:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:28.548 06:47:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:28.548 06:47:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:28.548 06:47:42 -- scripts/common.sh@367 -- # return 0 00:17:28.548 06:47:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.548 06:47:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:28.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.548 --rc genhtml_branch_coverage=1 00:17:28.548 --rc genhtml_function_coverage=1 00:17:28.548 --rc genhtml_legend=1 00:17:28.548 --rc geninfo_all_blocks=1 00:17:28.548 --rc geninfo_unexecuted_blocks=1 00:17:28.548 00:17:28.548 ' 00:17:28.548 06:47:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:28.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.548 --rc genhtml_branch_coverage=1 00:17:28.548 --rc genhtml_function_coverage=1 00:17:28.548 --rc genhtml_legend=1 00:17:28.548 --rc geninfo_all_blocks=1 00:17:28.548 --rc geninfo_unexecuted_blocks=1 00:17:28.548 00:17:28.548 ' 00:17:28.548 06:47:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:28.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.548 --rc genhtml_branch_coverage=1 00:17:28.548 --rc genhtml_function_coverage=1 00:17:28.548 --rc genhtml_legend=1 00:17:28.548 --rc geninfo_all_blocks=1 00:17:28.548 --rc geninfo_unexecuted_blocks=1 00:17:28.548 00:17:28.548 ' 00:17:28.548 06:47:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:28.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.548 --rc genhtml_branch_coverage=1 00:17:28.548 --rc genhtml_function_coverage=1 00:17:28.548 --rc genhtml_legend=1 00:17:28.548 --rc geninfo_all_blocks=1 00:17:28.548 --rc geninfo_unexecuted_blocks=1 00:17:28.548 00:17:28.548 ' 00:17:28.548 06:47:42 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:28.548 06:47:42 -- nvmf/common.sh@7 -- # uname -s 00:17:28.548 06:47:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.548 06:47:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.548 06:47:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.548 06:47:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.548 06:47:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.548 06:47:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.548 06:47:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.548 06:47:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.548 06:47:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.548 06:47:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.548 06:47:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:17:28.548 
06:47:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:17:28.548 06:47:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.548 06:47:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.548 06:47:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:28.548 06:47:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:28.548 06:47:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.548 06:47:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.548 06:47:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.548 06:47:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.548 06:47:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.548 06:47:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.548 06:47:42 -- paths/export.sh@5 -- # export PATH 00:17:28.548 06:47:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.548 06:47:42 -- nvmf/common.sh@46 -- # : 0 00:17:28.548 06:47:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:28.548 06:47:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:28.548 06:47:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:28.548 06:47:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.548 06:47:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.548 06:47:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:17:28.548 06:47:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:28.548 06:47:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:28.548 06:47:42 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.548 06:47:42 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.548 06:47:42 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.548 06:47:42 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:28.548 06:47:42 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:28.548 06:47:42 -- host/timeout.sh@19 -- # nvmftestinit 00:17:28.548 06:47:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:28.548 06:47:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.548 06:47:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:28.548 06:47:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:28.548 06:47:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:28.548 06:47:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.548 06:47:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.548 06:47:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.548 06:47:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:28.548 06:47:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:28.548 06:47:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:28.548 06:47:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:28.548 06:47:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:28.548 06:47:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:28.548 06:47:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.548 06:47:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.548 06:47:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:28.548 06:47:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:28.548 06:47:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:28.548 06:47:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:28.548 06:47:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:28.548 06:47:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.548 06:47:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:28.548 06:47:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:28.548 06:47:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:28.548 06:47:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:28.548 06:47:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:28.548 06:47:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:28.548 Cannot find device "nvmf_tgt_br" 00:17:28.548 06:47:42 -- nvmf/common.sh@154 -- # true 00:17:28.548 06:47:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.548 Cannot find device "nvmf_tgt_br2" 00:17:28.548 06:47:42 -- nvmf/common.sh@155 -- # true 00:17:28.548 06:47:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:28.548 06:47:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:28.548 Cannot find device "nvmf_tgt_br" 00:17:28.548 06:47:42 -- nvmf/common.sh@157 -- # true 00:17:28.548 06:47:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:28.548 Cannot find device "nvmf_tgt_br2" 00:17:28.548 06:47:42 -- nvmf/common.sh@158 -- # true 00:17:28.548 06:47:42 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:28.548 06:47:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:28.548 06:47:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.549 06:47:42 -- nvmf/common.sh@161 -- # true 00:17:28.549 06:47:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.549 06:47:42 -- nvmf/common.sh@162 -- # true 00:17:28.549 06:47:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:28.549 06:47:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.807 06:47:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.807 06:47:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:28.807 06:47:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:28.807 06:47:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:28.807 06:47:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:28.807 06:47:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:28.807 06:47:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:28.807 06:47:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:28.807 06:47:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:28.807 06:47:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:28.807 06:47:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:28.807 06:47:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.807 06:47:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.807 06:47:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:28.807 06:47:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:28.807 06:47:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:28.807 06:47:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:28.807 06:47:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:28.807 06:47:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:28.807 06:47:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:28.807 06:47:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:28.807 06:47:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:28.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:17:28.807 00:17:28.807 --- 10.0.0.2 ping statistics --- 00:17:28.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.807 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:28.807 06:47:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:28.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:28.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:17:28.807 00:17:28.807 --- 10.0.0.3 ping statistics --- 00:17:28.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.807 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:28.807 06:47:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:28.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:28.807 00:17:28.807 --- 10.0.0.1 ping statistics --- 00:17:28.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.807 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:28.807 06:47:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.807 06:47:42 -- nvmf/common.sh@421 -- # return 0 00:17:28.807 06:47:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:28.807 06:47:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.807 06:47:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:28.807 06:47:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:28.807 06:47:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.807 06:47:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:28.807 06:47:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:28.807 06:47:42 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:17:28.807 06:47:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:28.807 06:47:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:28.807 06:47:42 -- common/autotest_common.sh@10 -- # set +x 00:17:28.807 06:47:42 -- nvmf/common.sh@469 -- # nvmfpid=73572 00:17:28.807 06:47:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:28.807 06:47:42 -- nvmf/common.sh@470 -- # waitforlisten 73572 00:17:28.807 06:47:42 -- common/autotest_common.sh@829 -- # '[' -z 73572 ']' 00:17:28.807 06:47:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.807 06:47:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.807 06:47:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.807 06:47:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.807 06:47:42 -- common/autotest_common.sh@10 -- # set +x 00:17:28.807 [2024-12-14 06:47:42.789786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:28.807 [2024-12-14 06:47:42.789947] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.065 [2024-12-14 06:47:42.925242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:29.065 [2024-12-14 06:47:43.000228] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:29.065 [2024-12-14 06:47:43.000409] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.065 [2024-12-14 06:47:43.000427] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:29.065 [2024-12-14 06:47:43.000439] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:29.065 [2024-12-14 06:47:43.000640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.065 [2024-12-14 06:47:43.000660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.998 06:47:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.998 06:47:43 -- common/autotest_common.sh@862 -- # return 0 00:17:29.998 06:47:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:29.998 06:47:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:29.998 06:47:43 -- common/autotest_common.sh@10 -- # set +x 00:17:29.998 06:47:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.998 06:47:43 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:29.998 06:47:43 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:30.256 [2024-12-14 06:47:43.994602] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.256 06:47:44 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:30.514 Malloc0 00:17:30.514 06:47:44 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:30.773 06:47:44 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.031 06:47:44 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.294 [2024-12-14 06:47:45.095574] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.295 06:47:45 -- host/timeout.sh@32 -- # bdevperf_pid=73627 00:17:31.295 06:47:45 -- host/timeout.sh@34 -- # waitforlisten 73627 /var/tmp/bdevperf.sock 00:17:31.295 06:47:45 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:31.295 06:47:45 -- common/autotest_common.sh@829 -- # '[' -z 73627 ']' 00:17:31.295 06:47:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.295 06:47:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.295 06:47:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.295 06:47:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.295 06:47:45 -- common/autotest_common.sh@10 -- # set +x 00:17:31.295 [2024-12-14 06:47:45.158663] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:31.295 [2024-12-14 06:47:45.158770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73627 ] 00:17:31.553 [2024-12-14 06:47:45.294555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.553 [2024-12-14 06:47:45.353666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.486 06:47:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.486 06:47:46 -- common/autotest_common.sh@862 -- # return 0 00:17:32.486 06:47:46 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:32.486 06:47:46 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:17:32.744 NVMe0n1 00:17:33.001 06:47:46 -- host/timeout.sh@51 -- # rpc_pid=73650 00:17:33.001 06:47:46 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:33.001 06:47:46 -- host/timeout.sh@53 -- # sleep 1 00:17:33.001 Running I/O for 10 seconds... 00:17:33.935 06:47:47 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.197 [2024-12-14 06:47:48.011281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011354] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011362] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011370] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011392] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011399] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011407] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011415] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011422] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 
06:47:48.011430] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011437] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011444] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011452] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011482] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011511] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.197 [2024-12-14 06:47:48.011540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.198 [2024-12-14 06:47:48.011555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.198 [2024-12-14 06:47:48.011563] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.198 [2024-12-14 06:47:48.011570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.198 [2024-12-14 06:47:48.011578] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.198 [2024-12-14 06:47:48.011586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.198 [2024-12-14 06:47:48.011594] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to 
be set 00:17:34.198 [2024-12-14 06:47:48.011602] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77c480 is same with the state(5) to be set 00:17:34.198 [2024-12-14 06:47:48.011653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 
06:47:48.011864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.011981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.011990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.198 [2024-12-14 06:47:48.012243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.198 [2024-12-14 06:47:48.012300] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.198 [2024-12-14 06:47:48.012352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.198 [2024-12-14 06:47:48.012372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.198 [2024-12-14 06:47:48.012391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.198 [2024-12-14 06:47:48.012410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.198 [2024-12-14 06:47:48.012445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.198 [2024-12-14 06:47:48.012455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.199 [2024-12-14 06:47:48.012465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.199 [2024-12-14 06:47:48.012538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.199 [2024-12-14 06:47:48.012574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.199 [2024-12-14 06:47:48.012629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.199 [2024-12-14 06:47:48.012793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.199 [2024-12-14 06:47:48.012811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.012885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 
[2024-12-14 06:47:48.012894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.199 [2024-12-14 06:47:48.012903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.199 [2024-12-14 06:47:48.012921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.199 [2024-12-14 06:47:48.012951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.199 [2024-12-14 06:47:48.012970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.012998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.013006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.013017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.199 [2024-12-14 06:47:48.013025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.013044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.013052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.013063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.013071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.013081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.013090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.013100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.013109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.013119] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.199 [2024-12-14 06:47:48.013128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.013138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.013148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.013158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.013166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.013177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.013185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.013195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.013204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.013214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.013223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.199 [2024-12-14 06:47:48.013233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.199 [2024-12-14 06:47:48.013241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.200 [2024-12-14 06:47:48.013317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.200 [2024-12-14 06:47:48.013336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.200 [2024-12-14 06:47:48.013431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.200 [2024-12-14 06:47:48.013449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.200 [2024-12-14 06:47:48.013468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.200 [2024-12-14 06:47:48.013486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013496] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.200 [2024-12-14 06:47:48.013505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 
nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.200 [2024-12-14 06:47:48.013746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.200 [2024-12-14 06:47:48.013802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127048 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.013986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.013996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.200 [2024-12-14 06:47:48.014005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.014015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.200 [2024-12-14 06:47:48.014024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.200 [2024-12-14 06:47:48.014034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.201 [2024-12-14 06:47:48.014042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.201 [2024-12-14 06:47:48.014052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:127704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.201 [2024-12-14 06:47:48.014061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.201 [2024-12-14 06:47:48.014071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.201 [2024-12-14 06:47:48.014079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.201 [2024-12-14 06:47:48.014090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:34.201 [2024-12-14 06:47:48.014098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.201 [2024-12-14 06:47:48.014108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.201 [2024-12-14 06:47:48.014118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.201 [2024-12-14 06:47:48.014128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.201 [2024-12-14 06:47:48.014137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.201 [2024-12-14 06:47:48.014147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:127744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:34.201 [2024-12-14 06:47:48.014156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.201 [2024-12-14 06:47:48.014166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.201 [2024-12-14 06:47:48.014175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.201 [2024-12-14 06:47:48.014185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.201 [2024-12-14 06:47:48.014193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.201 [2024-12-14 06:47:48.014204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.201 [2024-12-14 06:47:48.014212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.201 [2024-12-14 06:47:48.014224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb350c0 is same with the state(5) to be set 00:17:34.201 [2024-12-14 06:47:48.014235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:34.201 [2024-12-14 06:47:48.014243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:34.201 [2024-12-14 06:47:48.014250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127128 len:8 PRP1 0x0 PRP2 0x0 00:17:34.201 [2024-12-14 06:47:48.014258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:34.201 [2024-12-14 06:47:48.014301] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb350c0 was disconnected and freed. reset controller. 
00:17:34.201 [2024-12-14 06:47:48.014532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:34.201 [2024-12-14 06:47:48.014619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad2010 (9): Bad file descriptor 00:17:34.201 [2024-12-14 06:47:48.014717] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:34.201 [2024-12-14 06:47:48.014778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:34.201 [2024-12-14 06:47:48.014819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:34.201 [2024-12-14 06:47:48.014835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad2010 with addr=10.0.0.2, port=4420 00:17:34.201 [2024-12-14 06:47:48.014845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad2010 is same with the state(5) to be set 00:17:34.201 [2024-12-14 06:47:48.014863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad2010 (9): Bad file descriptor 00:17:34.201 [2024-12-14 06:47:48.014879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:34.201 [2024-12-14 06:47:48.014887] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:34.201 [2024-12-14 06:47:48.014956] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:34.201 [2024-12-14 06:47:48.014979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:34.201 [2024-12-14 06:47:48.014990] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:34.201 06:47:48 -- host/timeout.sh@56 -- # sleep 2 00:17:36.149 [2024-12-14 06:47:50.015102] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.149 [2024-12-14 06:47:50.015198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.149 [2024-12-14 06:47:50.015244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:36.149 [2024-12-14 06:47:50.015262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad2010 with addr=10.0.0.2, port=4420 00:17:36.149 [2024-12-14 06:47:50.015276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad2010 is same with the state(5) to be set 00:17:36.149 [2024-12-14 06:47:50.015302] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad2010 (9): Bad file descriptor 00:17:36.149 [2024-12-14 06:47:50.015333] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:36.149 [2024-12-14 06:47:50.015345] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:36.149 [2024-12-14 06:47:50.015356] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:36.149 [2024-12-14 06:47:50.015383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:36.149 [2024-12-14 06:47:50.015395] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:36.149 06:47:50 -- host/timeout.sh@57 -- # get_controller 00:17:36.149 06:47:50 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:36.149 06:47:50 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:36.407 06:47:50 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:17:36.407 06:47:50 -- host/timeout.sh@58 -- # get_bdev 00:17:36.407 06:47:50 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:36.407 06:47:50 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:36.665 06:47:50 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:17:36.665 06:47:50 -- host/timeout.sh@61 -- # sleep 5 00:17:38.039 [2024-12-14 06:47:52.015557] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.039 [2024-12-14 06:47:52.015654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.039 [2024-12-14 06:47:52.015697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:38.039 [2024-12-14 06:47:52.015713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad2010 with addr=10.0.0.2, port=4420 00:17:38.039 [2024-12-14 06:47:52.015725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad2010 is same with the state(5) to be set 00:17:38.039 [2024-12-14 06:47:52.015748] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad2010 (9): Bad file descriptor 00:17:38.039 [2024-12-14 06:47:52.015765] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:38.039 [2024-12-14 06:47:52.015774] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:38.039 [2024-12-14 06:47:52.015783] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:38.039 [2024-12-14 06:47:52.015808] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:38.039 [2024-12-14 06:47:52.015819] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:40.566 [2024-12-14 06:47:54.015887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:40.566 [2024-12-14 06:47:54.016032] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:40.566 [2024-12-14 06:47:54.016044] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:40.566 [2024-12-14 06:47:54.016054] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:17:40.566 [2024-12-14 06:47:54.016084] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
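Aside for readers following the trace: the host/timeout.sh@41 and @37 steps above poll the bdevperf RPC socket with rpc.py and jq to confirm that the controller and its bdev are still registered while the target keeps refusing connections (connect() errno 111). A minimal sketch of that check, assuming the rpc.py path and socket shown in the trace; the helper names mirror the @57/@58 trace labels and the final test line is illustrative, not the exact script:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

get_controller() {
    # bdev_nvme_get_controllers returns a JSON array of attached controllers; '.[].name' yields e.g. NVMe0
    "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
}

get_bdev() {
    # bdev_get_bdevs lists the bdevs bdevperf currently exposes; '.[].name' yields e.g. NVMe0n1
    "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'
}

# While reconnects are still being retried both names stay present, as in the trace above.
[[ $(get_controller) == "NVMe0" ]] && [[ $(get_bdev) == "NVMe0n1" ]]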
00:17:41.132 00:17:41.132 Latency(us) 00:17:41.132 [2024-12-14T06:47:55.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.132 [2024-12-14T06:47:55.124Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:41.132 Verification LBA range: start 0x0 length 0x4000 00:17:41.132 NVMe0n1 : 8.17 1938.43 7.57 15.66 0.00 65420.26 3202.33 7015926.69 00:17:41.132 [2024-12-14T06:47:55.124Z] =================================================================================================================== 00:17:41.132 [2024-12-14T06:47:55.124Z] Total : 1938.43 7.57 15.66 0.00 65420.26 3202.33 7015926.69 00:17:41.132 0 00:17:41.695 06:47:55 -- host/timeout.sh@62 -- # get_controller 00:17:41.695 06:47:55 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:41.695 06:47:55 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:17:41.953 06:47:55 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:17:41.953 06:47:55 -- host/timeout.sh@63 -- # get_bdev 00:17:41.953 06:47:55 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:17:41.953 06:47:55 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:17:42.211 06:47:56 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:17:42.211 06:47:56 -- host/timeout.sh@65 -- # wait 73650 00:17:42.211 06:47:56 -- host/timeout.sh@67 -- # killprocess 73627 00:17:42.211 06:47:56 -- common/autotest_common.sh@936 -- # '[' -z 73627 ']' 00:17:42.211 06:47:56 -- common/autotest_common.sh@940 -- # kill -0 73627 00:17:42.211 06:47:56 -- common/autotest_common.sh@941 -- # uname 00:17:42.211 06:47:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:42.211 06:47:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73627 00:17:42.211 06:47:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:42.211 06:47:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:42.211 killing process with pid 73627 00:17:42.211 06:47:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73627' 00:17:42.211 06:47:56 -- common/autotest_common.sh@955 -- # kill 73627 00:17:42.211 06:47:56 -- common/autotest_common.sh@960 -- # wait 73627 00:17:42.211 Received shutdown signal, test time was about 9.301509 seconds 00:17:42.211 00:17:42.212 Latency(us) 00:17:42.212 [2024-12-14T06:47:56.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.212 [2024-12-14T06:47:56.204Z] =================================================================================================================== 00:17:42.212 [2024-12-14T06:47:56.204Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.469 06:47:56 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.728 [2024-12-14 06:47:56.535909] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.728 06:47:56 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:17:42.728 06:47:56 -- host/timeout.sh@74 -- # bdevperf_pid=73773 00:17:42.728 06:47:56 -- host/timeout.sh@76 -- # waitforlisten 73773 /var/tmp/bdevperf.sock 00:17:42.728 06:47:56 -- common/autotest_common.sh@829 -- # '[' -z 73773 ']' 00:17:42.728 06:47:56 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:17:42.728 06:47:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:42.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.728 06:47:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.728 06:47:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:42.728 06:47:56 -- common/autotest_common.sh@10 -- # set +x 00:17:42.728 [2024-12-14 06:47:56.591652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:42.728 [2024-12-14 06:47:56.591736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73773 ] 00:17:42.986 [2024-12-14 06:47:56.719657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.986 [2024-12-14 06:47:56.773714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.920 06:47:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.920 06:47:57 -- common/autotest_common.sh@862 -- # return 0 00:17:43.920 06:47:57 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:43.920 06:47:57 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:17:44.178 NVMe0n1 00:17:44.178 06:47:58 -- host/timeout.sh@84 -- # rpc_pid=73794 00:17:44.178 06:47:58 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:44.178 06:47:58 -- host/timeout.sh@86 -- # sleep 1 00:17:44.436 Running I/O for 10 seconds... 
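Aside: the host/timeout.sh@71 through @86 steps traced above re-add the TCP listener on the target, start a fresh bdevperf instance in RPC-driven mode, relax the I/O retry policy, and attach the controller with explicit reconnect and loss timeouts before the 10-second verify workload begins. A condensed sketch of that sequence using only the flags visible in the trace; paths, NQN and address are copied from the log, while the waitforlisten handshake and PID bookkeeping are omitted and the backgrounding is illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bsock=/var/tmp/bdevperf.sock

# Re-announce the TCP listener that was removed earlier in the timeout test.
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf on core 2 (-m 0x4), waiting for configuration over RPC (-z): queue depth 128, 4 KiB verify I/O for 10 s.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r "$bsock" -q 128 -o 4096 -w verify -t 10 -f &

# Retry failed I/O indefinitely (-r -1), then attach with bounded reconnect behaviour:
# drop the controller after 5 s without a connection, fail I/O fast after 2 s, retry the connect every 1 s.
"$rpc" -s "$bsock" bdev_nvme_set_options -r -1
"$rpc" -s "$bsock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Drive the workload over the same RPC socket.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bsock" perform_tests &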
00:17:45.372 06:47:59 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.372 [2024-12-14 06:47:59.353718] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with the state(5) to be set 00:17:45.372 [2024-12-14 06:47:59.353769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with the state(5) to be set 00:17:45.373 [2024-12-14 06:47:59.353763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.373 [2024-12-14 06:47:59.353780] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with the state(5) to be set 00:17:45.373 [2024-12-14 06:47:59.353789] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with the state(5) to be set 00:17:45.373 [2024-12-14 06:47:59.353796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.353798] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with the state(5) to be set 00:17:45.373 [2024-12-14 06:47:59.353808] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with the state(5) to be set 00:17:45.373 [2024-12-14 06:47:59.353809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.373 [2024-12-14 06:47:59.353817] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with the state(5) to be set 00:17:45.373 [2024-12-14 06:47:59.353819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.353826] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with the state(5) to be set 00:17:45.373 [2024-12-14 06:47:59.353830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.373 [2024-12-14 06:47:59.353835] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with the state(5) to be set 00:17:45.373 [2024-12-14 06:47:59.353840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.353844] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with the state(5) to be set 00:17:45.373 [2024-12-14 06:47:59.353850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:45.373 [2024-12-14 06:47:59.353853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with the state(5) to be set 00:17:45.373 [2024-12-14 06:47:59.353861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.353862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with the state(5) to be set 00:17:45.373 [2024-12-14 
06:47:59.353871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with t[2024-12-14 06:47:59.353871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e010 is same he state(5) to be set 00:17:45.373 with the state(5) to be set 00:17:45.373 [2024-12-14 06:47:59.353893] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5dc7b0 is same with the state(5) to be set 00:17:45.373 [2024-12-14 06:47:59.353959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.353992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354165] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:123024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.373 [2024-12-14 06:47:59.354323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.373 [2024-12-14 06:47:59.354380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.373 [2024-12-14 06:47:59.354418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.373 [2024-12-14 06:47:59.354437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:123720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.373 [2024-12-14 06:47:59.354495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:123104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:123152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.373 [2024-12-14 06:47:59.354754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.373 [2024-12-14 06:47:59.354774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 [2024-12-14 06:47:59.354806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.373 [2024-12-14 06:47:59.354815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.373 
[2024-12-14 06:47:59.354826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.354836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.354848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.354857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.354869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.354878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.354889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:123816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.354899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.354933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.354945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.354957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.354967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.354978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.354987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.354999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:123856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355060] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:123200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:123904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:123920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:123936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355495] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 
lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124064 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.355968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.355980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.355990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.356001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.356010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.356022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.356031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.356042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.374 [2024-12-14 06:47:59.356052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.374 [2024-12-14 06:47:59.356063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.374 [2024-12-14 06:47:59.356073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.375 [2024-12-14 06:47:59.356093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.375 [2024-12-14 06:47:59.356139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 
[2024-12-14 06:47:59.356159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.375 [2024-12-14 06:47:59.356348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.375 [2024-12-14 06:47:59.356369] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.375 [2024-12-14 06:47:59.356403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.375 [2024-12-14 06:47:59.356424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.375 [2024-12-14 06:47:59.356466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.375 [2024-12-14 06:47:59.356488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.375 [2024-12-14 06:47:59.356507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:45.375 [2024-12-14 06:47:59.356604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.375 [2024-12-14 06:47:59.356749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17710c0 is same with the state(5) to be set 00:17:45.375 [2024-12-14 06:47:59.356771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:45.375 [2024-12-14 06:47:59.356780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:45.375 [2024-12-14 06:47:59.356788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123656 len:8 PRP1 0x0 PRP2 0x0 00:17:45.375 [2024-12-14 06:47:59.356800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:45.375 [2024-12-14 06:47:59.356843] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17710c0 was disconnected and freed. reset controller. 
00:17:45.375 [2024-12-14 06:47:59.357132] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:45.375 [2024-12-14 06:47:59.357158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170e010 (9): Bad file descriptor
00:17:45.375 [2024-12-14 06:47:59.357266] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:17:45.375 [2024-12-14 06:47:59.357339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:45.375 [2024-12-14 06:47:59.357406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:45.375 [2024-12-14 06:47:59.357433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170e010 with addr=10.0.0.2, port=4420 [2024-12-14 06:47:59.357451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e010 is same with the state(5) to be set
00:17:45.375 [2024-12-14 06:47:59.357481] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170e010 (9): Bad file descriptor
00:17:45.375 [2024-12-14 06:47:59.357507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:45.375 [2024-12-14 06:47:59.357522] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:17:45.375 [2024-12-14 06:47:59.357538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:45.375 [2024-12-14 06:47:59.357568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:45.375 [2024-12-14 06:47:59.357587] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:45.633 06:47:59 -- host/timeout.sh@90 -- # sleep 1
00:17:46.566 [2024-12-14 06:48:00.357728] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:17:46.566 [2024-12-14 06:48:00.357863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:46.566 [2024-12-14 06:48:00.357921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:46.566 [2024-12-14 06:48:00.357940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170e010 with addr=10.0.0.2, port=4420 [2024-12-14 06:48:00.357954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e010 is same with the state(5) to be set
00:17:46.566 [2024-12-14 06:48:00.357982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170e010 (9): Bad file descriptor
00:17:46.566 [2024-12-14 06:48:00.358002] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:46.566 [2024-12-14 06:48:00.358012] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:17:46.566 [2024-12-14 06:48:00.358022] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:46.566 [2024-12-14 06:48:00.358063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
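The connection-refused loop above is the scenario host/timeout.sh appears to be exercising: the TCP listener for the subsystem was taken away earlier in the run, so every reconnect attempt from the host is refused (errno = 111) until the listener is re-added at host/timeout.sh@91 just below, after which the controller reset finally succeeds. A minimal sketch of that listener bounce, reusing the same rpc.py invocations that appear verbatim in this log and assuming the nqn.2016-06.io.spdk:cnode1 subsystem and the bdevperf host are already set up (the one-second sleep is only illustrative; the test's own pacing is driven by timeout.sh):

  # take the listener away so in-flight I/O runs into the host-side timeout/reset path
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # while the port is closed, host reconnects fail with connect() errno = 111
  sleep 1
  # bring the listener back; the next controller reset can reconnect to 10.0.0.2:4420
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420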
00:17:46.566 [2024-12-14 06:48:00.358360] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:46.566 06:48:00 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:46.824 [2024-12-14 06:48:00.632033] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:46.824 06:48:00 -- host/timeout.sh@92 -- # wait 73794
00:17:47.390 [2024-12-14 06:48:01.374271] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:55.495
00:17:55.495 Latency(us)
00:17:55.495 [2024-12-14T06:48:09.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:55.495 [2024-12-14T06:48:09.487Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:55.495 Verification LBA range: start 0x0 length 0x4000
00:17:55.495 NVMe0n1 : 10.01 9711.74 37.94 0.00 0.00 13158.44 893.67 3019898.88
00:17:55.495 [2024-12-14T06:48:09.487Z] ===================================================================================================================
00:17:55.495 [2024-12-14T06:48:09.487Z] Total : 9711.74 37.94 0.00 0.00 13158.44 893.67 3019898.88
00:17:55.495 0
00:17:55.495 06:48:08 -- host/timeout.sh@97 -- # rpc_pid=73903
00:17:55.495 06:48:08 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:55.495 06:48:08 -- host/timeout.sh@98 -- # sleep 1
00:17:55.495 Running I/O for 10 seconds...
00:17:55.755 06:48:09 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:55.755 [2024-12-14 06:48:09.504990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db4a0 is same with the state(5) to be set
00:17:55.755 [2024-12-14 06:48:09.505060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db4a0 is same with the state(5) to be set
00:17:55.755 [2024-12-14 06:48:09.505087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db4a0 is same with the state(5) to be set
00:17:55.755 [2024-12-14 06:48:09.505096] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db4a0 is same with the state(5) to be set
00:17:55.755 [2024-12-14 06:48:09.505104] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db4a0 is same with the state(5) to be set
00:17:55.755 [2024-12-14 06:48:09.505111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db4a0 is same with the state(5) to be set
00:17:55.755 [2024-12-14 06:48:09.505119] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db4a0 is same with the state(5) to be set
00:17:55.755 [2024-12-14 06:48:09.505126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db4a0 is same with the state(5) to be set
00:17:55.755 [2024-12-14 06:48:09.505134] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db4a0 is same with the state(5) to be set
00:17:55.755 [2024-12-14 06:48:09.505193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:124048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:55.755 [2024-12-14 06:48:09.505224] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.505245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.505255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.505266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.505275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.505285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.505293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.505304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.505312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.505323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.505331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.505358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.505723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.505752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:124160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.505763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.505777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.505786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.505798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.505807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.505819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.755 [2024-12-14 06:48:09.505829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.505855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.755 [2024-12-14 06:48:09.505863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.505874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.505883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.506204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.755 [2024-12-14 06:48:09.506453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.506480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.506491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.506503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.506512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.506524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.506535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.506547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:124776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.755 [2024-12-14 06:48:09.506556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.506567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.506577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.506588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.755 [2024-12-14 06:48:09.506597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.507010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.507035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.507049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.755 [2024-12-14 06:48:09.507058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.755 [2024-12-14 06:48:09.507070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.755 [2024-12-14 06:48:09.507080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.507091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.756 [2024-12-14 06:48:09.507100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.507112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.507121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.507132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.756 [2024-12-14 06:48:09.507141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.507153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.507163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.507279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.507295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.507307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.756 [2024-12-14 06:48:09.507316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.507329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.507590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.507607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.507617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:55.756 [2024-12-14 06:48:09.507628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.507638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.507650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.507659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.507672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:124208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.507681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.507692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.507702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.507713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:124248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.508092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.508119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:124264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.508130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.508142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.508152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.508164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.508173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.508184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.756 [2024-12-14 06:48:09.508194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.508205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.508214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 
06:48:09.508225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.508234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.508593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.508607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.508619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.508629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.508640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.508650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.508794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.756 [2024-12-14 06:48:09.508811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.508823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.756 [2024-12-14 06:48:09.508943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.508959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.756 [2024-12-14 06:48:09.509094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.509208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.756 [2024-12-14 06:48:09.509221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.509234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.756 [2024-12-14 06:48:09.509243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.509255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:124272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.509265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.509277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.509286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.509298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.509307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.509319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.509651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.509675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.509686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.509699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.509708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.509719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.509729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.509740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.509749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.509761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.509770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.509781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.510039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.510053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.510201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.510501] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-12-14 06:48:09.510611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.510625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.756 [2024-12-14 06:48:09.510635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.756 [2024-12-14 06:48:09.510647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.510657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.510668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.510677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.510689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.510699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.510710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.510719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.511074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.757 [2024-12-14 06:48:09.511101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.511115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.511125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.511137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.511146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.511157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.757 [2024-12-14 06:48:09.511167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.511178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 
nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.757 [2024-12-14 06:48:09.511187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.511198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.757 [2024-12-14 06:48:09.511207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.511220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.757 [2024-12-14 06:48:09.511505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.511583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.511594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.511606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.511616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.511629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.511639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.511650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.511659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.511671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.511681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.511692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.512049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.512079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.512089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.512103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124504 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.512112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.512124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.512133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.512144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.512153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.512165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.757 [2024-12-14 06:48:09.512175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.512186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.512533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.512550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.512561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.512572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.512582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.512593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.512602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.512614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.512732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.512753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.512763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.512913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:55.757 [2024-12-14 06:48:09.513019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.513036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.513046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.513057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.513067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.513078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.513087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.513099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.513109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.513120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.757 [2024-12-14 06:48:09.513129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.513468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.757 [2024-12-14 06:48:09.513483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.513494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.757 [2024-12-14 06:48:09.513504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.513515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.757 [2024-12-14 06:48:09.513525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.513536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.757 [2024-12-14 06:48:09.513797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.513813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.757 [2024-12-14 
06:48:09.513823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.757 [2024-12-14 06:48:09.513835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.757 [2024-12-14 06:48:09.513845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.513856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.513866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.513877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.758 [2024-12-14 06:48:09.514003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.514017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.758 [2024-12-14 06:48:09.514027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.514165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.514179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.514190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.514341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.514644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.758 [2024-12-14 06:48:09.514757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.514773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.514783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.514796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.758 [2024-12-14 06:48:09.514805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.514816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.758 [2024-12-14 06:48:09.514826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.514837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.758 [2024-12-14 06:48:09.514846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.514858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.514987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.515002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.515144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.515358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.515372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.515384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.758 [2024-12-14 06:48:09.515393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.515405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.515414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.515425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:55.758 [2024-12-14 06:48:09.515555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.515568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.515796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.515812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.515822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.515834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.515843] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.515855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.515864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.515875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.515899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.516001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.516017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.516029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.516039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.516173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.758 [2024-12-14 06:48:09.516187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.516198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1784cc0 is same with the state(5) to be set 00:17:55.758 [2024-12-14 06:48:09.516211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:55.758 [2024-12-14 06:48:09.516455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:55.758 [2024-12-14 06:48:09.516467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124696 len:8 PRP1 0x0 PRP2 0x0 00:17:55.758 [2024-12-14 06:48:09.516479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.516524] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1784cc0 was disconnected and freed. reset controller. 
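
Everything up to this point is the host draining I/O qpair 1: once its transport qpair goes down, every outstanding READ/WRITE is completed with "ABORTED - SQ DELETION", the qpair (0x1784cc0) is disconnected and freed, and bdev_nvme starts a controller reset. A quick way to gauge how much I/O was in flight is to count those abort completions; this is only a hypothetical post-mortem helper, not part of the test, and "autotest.log" is a placeholder name for a saved copy of this log:

    # hypothetical helper, not from host/timeout.sh; counts abort completions in a saved log
    grep -o 'ABORTED - SQ DELETION' autotest.log | wc -l
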
00:17:55.758 [2024-12-14 06:48:09.516869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.758 [2024-12-14 06:48:09.516910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.516923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.758 [2024-12-14 06:48:09.516932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.516942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.758 [2024-12-14 06:48:09.516951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.516961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:55.758 [2024-12-14 06:48:09.516970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:55.758 [2024-12-14 06:48:09.517212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e010 is same with the state(5) to be set 00:17:55.758 [2024-12-14 06:48:09.517617] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:55.758 [2024-12-14 06:48:09.517684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170e010 (9): Bad file descriptor 00:17:55.758 [2024-12-14 06:48:09.517788] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:55.758 [2024-12-14 06:48:09.518075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:55.758 [2024-12-14 06:48:09.518140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:55.758 [2024-12-14 06:48:09.518158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170e010 with addr=10.0.0.2, port=4420 00:17:55.758 [2024-12-14 06:48:09.518169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e010 is same with the state(5) to be set 00:17:55.758 [2024-12-14 06:48:09.518415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170e010 (9): Bad file descriptor 00:17:55.758 [2024-12-14 06:48:09.518435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:55.758 [2024-12-14 06:48:09.518445] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:55.758 [2024-12-14 06:48:09.518456] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:55.758 [2024-12-14 06:48:09.518673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:55.758 [2024-12-14 06:48:09.518701] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:55.758 06:48:09 -- host/timeout.sh@101 -- # sleep 3 00:17:56.691 [2024-12-14 06:48:10.518819] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:56.691 [2024-12-14 06:48:10.518975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:56.691 [2024-12-14 06:48:10.519022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:56.691 [2024-12-14 06:48:10.519040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170e010 with addr=10.0.0.2, port=4420 00:17:56.691 [2024-12-14 06:48:10.519053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e010 is same with the state(5) to be set 00:17:56.691 [2024-12-14 06:48:10.519079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170e010 (9): Bad file descriptor 00:17:56.692 [2024-12-14 06:48:10.519097] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:56.692 [2024-12-14 06:48:10.519107] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:56.692 [2024-12-14 06:48:10.519117] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:56.692 [2024-12-14 06:48:10.519146] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:56.692 [2024-12-14 06:48:10.519469] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:57.624 [2024-12-14 06:48:11.519624] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.624 [2024-12-14 06:48:11.519778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.624 [2024-12-14 06:48:11.519822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:57.624 [2024-12-14 06:48:11.519838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170e010 with addr=10.0.0.2, port=4420 00:17:57.624 [2024-12-14 06:48:11.519852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e010 is same with the state(5) to be set 00:17:57.624 [2024-12-14 06:48:11.519881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170e010 (9): Bad file descriptor 00:17:57.624 [2024-12-14 06:48:11.519912] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:57.624 [2024-12-14 06:48:11.519923] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:57.624 [2024-12-14 06:48:11.519934] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:57.624 [2024-12-14 06:48:11.519962] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
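
The retry loop above is the expected failure mode while the target has no listener: the reconnect attempts at 06:48:09, 06:48:10 and 06:48:11 (roughly once per second) all fail in uring/posix sock_create with errno = 111 (ECONNREFUSED on Linux) when connecting to 10.0.0.2:4420, each ends with "Resetting controller failed", and host/timeout.sh@101 simply sleeps 3 seconds while the driver keeps retrying. A plain TCP probe would fail the same way; the check below is only an illustration (it assumes a netcat that supports -z/-w) and is not part of the test:

    # illustrative only: with the listener removed, a raw TCP connect to the
    # target port is refused, which is what errno 111 (ECONNREFUSED) means here
    nc -z -w 1 10.0.0.2 4420 || echo "connection refused"
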
00:17:57.624 [2024-12-14 06:48:11.519974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:58.558 [2024-12-14 06:48:12.520620] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:58.558 [2024-12-14 06:48:12.520745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:58.558 [2024-12-14 06:48:12.520788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:58.558 [2024-12-14 06:48:12.520804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170e010 with addr=10.0.0.2, port=4420 00:17:58.558 [2024-12-14 06:48:12.520816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e010 is same with the state(5) to be set 00:17:58.558 [2024-12-14 06:48:12.521338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170e010 (9): Bad file descriptor 00:17:58.558 [2024-12-14 06:48:12.521486] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:58.558 [2024-12-14 06:48:12.521514] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:58.558 [2024-12-14 06:48:12.521523] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:58.558 [2024-12-14 06:48:12.524089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:58.558 [2024-12-14 06:48:12.524123] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:58.558 06:48:12 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.817 [2024-12-14 06:48:12.801542] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.074 06:48:12 -- host/timeout.sh@103 -- # wait 73903 00:17:59.640 [2024-12-14 06:48:13.549790] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
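
Here the test restores connectivity: after one more failed attempt at 06:48:12, host/timeout.sh@102 re-adds the TCP listener via rpc.py, the target logs that it is listening on 10.0.0.2 port 4420 again, and the next reconnect attempt succeeds ("Resetting controller successful" at 06:48:13). In outline, the listener toggle exercised by this part of the test looks like the sketch below; the two rpc.py commands are copied from the trace, but the exact control flow in host/timeout.sh may differ:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # drop the listener so in-flight I/O is aborted and reconnect attempts start failing
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3    # reconnect attempts fail with ECONNREFUSED in the meantime
    # bring the listener back; the next reconnect attempt should succeed
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
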
00:18:04.907 
00:18:04.907 Latency(us)
00:18:04.907 [2024-12-14T06:48:18.899Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average      min         max
00:18:04.907 [2024-12-14T06:48:18.899Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:04.907   Verification LBA range: start 0x0 length 0x4000
00:18:04.907   NVMe0n1                              :     10.01  8320.86   32.50  6077.79   0.00   8867.75   430.08  3019898.88
00:18:04.907 [2024-12-14T06:48:18.899Z] ===================================================================================================================
00:18:04.907 [2024-12-14T06:48:18.900Z] Total                                :            8320.86   32.50  6077.79   0.00   8867.75     0.00  3019898.88
00:18:04.908 0
00:18:04.908 06:48:18 -- host/timeout.sh@105 -- # killprocess 73773
00:18:04.908 06:48:18 -- common/autotest_common.sh@936 -- # '[' -z 73773 ']'
00:18:04.908 06:48:18 -- common/autotest_common.sh@940 -- # kill -0 73773
00:18:04.908 06:48:18 -- common/autotest_common.sh@941 -- # uname
00:18:04.908 06:48:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:04.908 06:48:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73773
00:18:04.908 killing process with pid 73773
00:18:04.908 Received shutdown signal, test time was about 10.000000 seconds
00:18:04.908 
00:18:04.908 Latency(us)
00:18:04.908 [2024-12-14T06:48:18.900Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average      min         max
00:18:04.908 [2024-12-14T06:48:18.900Z] ===================================================================================================================
00:18:04.908 [2024-12-14T06:48:18.900Z] Total                                :               0.00    0.00     0.00   0.00      0.00     0.00        0.00
00:18:04.908 06:48:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:18:04.908 06:48:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:18:04.908 06:48:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73773'
00:18:04.908 06:48:18 -- common/autotest_common.sh@955 -- # kill 73773
00:18:04.908 06:48:18 -- common/autotest_common.sh@960 -- # wait 73773
00:18:04.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:04.908 06:48:18 -- host/timeout.sh@110 -- # bdevperf_pid=74023
00:18:04.908 06:48:18 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:18:04.908 06:48:18 -- host/timeout.sh@112 -- # waitforlisten 74023 /var/tmp/bdevperf.sock
00:18:04.908 06:48:18 -- common/autotest_common.sh@829 -- # '[' -z 74023 ']'
00:18:04.908 06:48:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:04.908 06:48:18 -- common/autotest_common.sh@834 -- # local max_retries=100
00:18:04.908 06:48:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:04.908 06:48:18 -- common/autotest_common.sh@838 -- # xtrace_disable
00:18:04.908 06:48:18 -- common/autotest_common.sh@10 -- # set +x
00:18:04.908 [2024-12-14 06:48:18.670986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
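
The summary table closes out the first bdevperf job (PID 73773): over the 10 s verify run NVMe0n1 averaged about 8321 IOPS with roughly 6078 failed I/O per second, presumably reflecting the window in which the listener was removed. The test then kills that process and launches a fresh bdevperf (PID 74023) with a randread workload for the next case. Reconstructed from the trace, the relaunch is roughly the following sketch; the bdevperf arguments are copied from the log, while the backgrounding detail and the exact waitforlisten behaviour (an autotest_common.sh helper that polls until the RPC socket answers) are assumptions:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    bdevperf_pid=$!              # 74023 in this run
    # wait until the bdevperf RPC socket /var/tmp/bdevperf.sock accepts requests
    # before issuing any rpc.py calls against it
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
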
00:18:04.908 [2024-12-14 06:48:18.671085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74023 ] 00:18:04.908 [2024-12-14 06:48:18.808475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.908 [2024-12-14 06:48:18.863548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.843 06:48:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.843 06:48:19 -- common/autotest_common.sh@862 -- # return 0 00:18:05.843 06:48:19 -- host/timeout.sh@116 -- # dtrace_pid=74039 00:18:05.843 06:48:19 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 74023 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:05.843 06:48:19 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:06.101 06:48:19 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:06.360 NVMe0n1 00:18:06.360 06:48:20 -- host/timeout.sh@124 -- # rpc_pid=74075 00:18:06.360 06:48:20 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:06.360 06:48:20 -- host/timeout.sh@125 -- # sleep 1 00:18:06.360 Running I/O for 10 seconds... 00:18:07.296 06:48:21 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:07.558 [2024-12-14 06:48:21.503424] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.558 [2024-12-14 06:48:21.503490] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.558 [2024-12-14 06:48:21.503516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.558 [2024-12-14 06:48:21.503525] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.558 [2024-12-14 06:48:21.503532] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.558 [2024-12-14 06:48:21.503540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.558 [2024-12-14 06:48:21.503548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.558 [2024-12-14 06:48:21.503556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.558 [2024-12-14 06:48:21.503564] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.558 [2024-12-14 06:48:21.503573] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.558 [2024-12-14 06:48:21.503581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x5907f0 is same with the state(5) to be set
00:18:07.558 [2024-12-14 06:48:21.503588] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set (this identical error is logged for every entry from this timestamp through [2024-12-14 06:48:21.504539], 00:18:07.559)
00:18:07.559 [2024-12-14 06:48:21.504547]
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.560 [2024-12-14 06:48:21.504555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.560 [2024-12-14 06:48:21.504563] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.560 [2024-12-14 06:48:21.504571] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.560 [2024-12-14 06:48:21.504579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.560 [2024-12-14 06:48:21.504587] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.560 [2024-12-14 06:48:21.504595] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5907f0 is same with the state(5) to be set 00:18:07.560 [2024-12-14 06:48:21.504803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.505047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.505090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.505104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.505117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.505126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.505137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.505147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.505158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.505167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.505178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.505187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.505473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.505560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.505576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.505586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.505597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.505606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.505617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.505731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.505751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.505761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.505772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.505900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.506025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.506051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.506197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.506307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.506321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.506332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.506486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.506624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.506746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.506766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 
[2024-12-14 06:48:21.506780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.506926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.507182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.507197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.507208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.507217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.507229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.507239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.507250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.507259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.507270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.507279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.507501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.507515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.507526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.507535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.507547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.507667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.507695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.507796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.507818] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.507829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.507841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.508049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.508066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.508076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.508087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.508097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.508115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.508124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.508258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.508270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.508282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.508400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.508421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.508431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.508509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.508522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.560 [2024-12-14 06:48:21.508533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.560 [2024-12-14 06:48:21.508543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.508554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.508563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.508574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.508584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.508819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.508838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.508850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.508860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.508872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.508900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.508913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.508922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.508933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.508942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.508954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:79 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66208 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.509978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.509990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.510000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.510194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.510209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.510220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.510230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.510241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.510250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.510262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.510271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.510400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.510416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.510428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.510437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.510572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.510674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.510698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.510709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.510721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:07.561 [2024-12-14 06:48:21.510852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.510869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.511025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.511103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.511116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.511128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.511138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.511150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.511160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.511171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.511180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.511191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.511326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.511539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.561 [2024-12-14 06:48:21.511552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.561 [2024-12-14 06:48:21.511565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.511574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.511585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.511595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.511606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.511851] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.511868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.511891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.511908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.511918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.511930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.511940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.511951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.512082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.512157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.512167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.512179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.512189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.512200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.512209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.512220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.512229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.512241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.512362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.512383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.512393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.512505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.512518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.512530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.512660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.512678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.512805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.512828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.512972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.513891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.513909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.514146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.514168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.514181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.514190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.514363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.514501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.514606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.514623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.514635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.514644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.514759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.514780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.562 [2024-12-14 06:48:21.514795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.562 [2024-12-14 06:48:21.514915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.514940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.563 [2024-12-14 06:48:21.515074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.515211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.563 [2024-12-14 06:48:21.515345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.515478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.563 [2024-12-14 06:48:21.515499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.515597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.563 [2024-12-14 06:48:21.515609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:07.563 [2024-12-14 06:48:21.515621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.563 [2024-12-14 06:48:21.515630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.515893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.563 [2024-12-14 06:48:21.515920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.515933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.563 [2024-12-14 06:48:21.515943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.515955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.563 [2024-12-14 06:48:21.515964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.516048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.563 [2024-12-14 06:48:21.516071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.516083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.563 [2024-12-14 06:48:21.516092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.516213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.563 [2024-12-14 06:48:21.516239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.516341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.563 [2024-12-14 06:48:21.516353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.516365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.563 [2024-12-14 06:48:21.516375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.516509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4c0c0 is same with the state(5) to be set 00:18:07.563 [2024-12-14 06:48:21.516618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:07.563 [2024-12-14 06:48:21.516630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:18:07.563 [2024-12-14 06:48:21.516639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6296 len:8 PRP1 0x0 PRP2 0x0 00:18:07.563 [2024-12-14 06:48:21.516650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.516899] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b4c0c0 was disconnected and freed. reset controller. 00:18:07.563 [2024-12-14 06:48:21.517123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.563 [2024-12-14 06:48:21.517151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.517164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.563 [2024-12-14 06:48:21.517174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.517184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.563 [2024-12-14 06:48:21.517193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.517203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.563 [2024-12-14 06:48:21.517330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.563 [2024-12-14 06:48:21.517470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9010 is same with the state(5) to be set 00:18:07.563 [2024-12-14 06:48:21.518030] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:07.563 [2024-12-14 06:48:21.518068] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae9010 (9): Bad file descriptor 00:18:07.563 [2024-12-14 06:48:21.518367] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.563 [2024-12-14 06:48:21.518447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.563 [2024-12-14 06:48:21.518692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:07.563 [2024-12-14 06:48:21.518725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae9010 with addr=10.0.0.2, port=4420 00:18:07.563 [2024-12-14 06:48:21.518738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9010 is same with the state(5) to be set 00:18:07.563 [2024-12-14 06:48:21.518761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae9010 (9): Bad file descriptor 00:18:07.563 [2024-12-14 06:48:21.518901] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:07.563 [2024-12-14 06:48:21.519032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:07.563 [2024-12-14 
06:48:21.519047] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:07.563 [2024-12-14 06:48:21.519072] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:07.563 [2024-12-14 06:48:21.519083] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:07.563 06:48:21 -- host/timeout.sh@128 -- # wait 74075 00:18:10.097 [2024-12-14 06:48:23.519400] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.097 [2024-12-14 06:48:23.519514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.097 [2024-12-14 06:48:23.519556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:10.097 [2024-12-14 06:48:23.519572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae9010 with addr=10.0.0.2, port=4420 00:18:10.097 [2024-12-14 06:48:23.519586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9010 is same with the state(5) to be set 00:18:10.097 [2024-12-14 06:48:23.519612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae9010 (9): Bad file descriptor 00:18:10.097 [2024-12-14 06:48:23.519629] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:10.097 [2024-12-14 06:48:23.519639] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:10.097 [2024-12-14 06:48:23.519648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:10.097 [2024-12-14 06:48:23.519673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:10.097 [2024-12-14 06:48:23.519684] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:11.999 [2024-12-14 06:48:25.519873] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.999 [2024-12-14 06:48:25.520007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.999 [2024-12-14 06:48:25.520052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:11.999 [2024-12-14 06:48:25.520069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae9010 with addr=10.0.0.2, port=4420 00:18:11.999 [2024-12-14 06:48:25.520099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae9010 is same with the state(5) to be set 00:18:11.999 [2024-12-14 06:48:25.520346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae9010 (9): Bad file descriptor 00:18:11.999 [2024-12-14 06:48:25.520436] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:11.999 [2024-12-14 06:48:25.520450] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:11.999 [2024-12-14 06:48:25.520460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:11.999 [2024-12-14 06:48:25.520487] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:11.999 [2024-12-14 06:48:25.520783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:13.901 [2024-12-14 06:48:27.520851] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:13.901 [2024-12-14 06:48:27.520927] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:13.901 [2024-12-14 06:48:27.520955] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:13.901 [2024-12-14 06:48:27.520967] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:13.901 [2024-12-14 06:48:27.520992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:14.839 00:18:14.839 Latency(us) 00:18:14.839 [2024-12-14T06:48:28.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.839 [2024-12-14T06:48:28.831Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:18:14.839 NVMe0n1 : 8.17 2191.80 8.56 15.67 0.00 58037.58 7357.91 7046430.72 00:18:14.839 [2024-12-14T06:48:28.831Z] =================================================================================================================== 00:18:14.839 [2024-12-14T06:48:28.831Z] Total : 2191.80 8.56 15.67 0.00 58037.58 7357.91 7046430.72 00:18:14.839 0 00:18:14.839 06:48:28 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:14.839 Attaching 5 probes... 00:18:14.839 1360.466675: reset bdev controller NVMe0 00:18:14.839 1360.551141: reconnect bdev controller NVMe0 00:18:14.839 3361.659195: reconnect delay bdev controller NVMe0 00:18:14.839 3361.696231: reconnect bdev controller NVMe0 00:18:14.839 5362.153242: reconnect delay bdev controller NVMe0 00:18:14.839 5362.173183: reconnect bdev controller NVMe0 00:18:14.839 7363.271637: reconnect delay bdev controller NVMe0 00:18:14.839 7363.288255: reconnect bdev controller NVMe0 00:18:14.839 06:48:28 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:18:14.839 06:48:28 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:18:14.839 06:48:28 -- host/timeout.sh@136 -- # kill 74039 00:18:14.839 06:48:28 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:14.839 06:48:28 -- host/timeout.sh@139 -- # killprocess 74023 00:18:14.839 06:48:28 -- common/autotest_common.sh@936 -- # '[' -z 74023 ']' 00:18:14.839 06:48:28 -- common/autotest_common.sh@940 -- # kill -0 74023 00:18:14.839 06:48:28 -- common/autotest_common.sh@941 -- # uname 00:18:14.839 06:48:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:14.839 06:48:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74023 00:18:14.839 killing process with pid 74023 00:18:14.839 Received shutdown signal, test time was about 8.237969 seconds 00:18:14.839 00:18:14.839 Latency(us) 00:18:14.839 [2024-12-14T06:48:28.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.839 [2024-12-14T06:48:28.831Z] =================================================================================================================== 00:18:14.839 [2024-12-14T06:48:28.831Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.839 06:48:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:14.839 06:48:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 
= sudo ']' 00:18:14.839 06:48:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74023' 00:18:14.839 06:48:28 -- common/autotest_common.sh@955 -- # kill 74023 00:18:14.839 06:48:28 -- common/autotest_common.sh@960 -- # wait 74023 00:18:14.839 06:48:28 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:15.097 06:48:29 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:18:15.097 06:48:29 -- host/timeout.sh@145 -- # nvmftestfini 00:18:15.097 06:48:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:15.097 06:48:29 -- nvmf/common.sh@116 -- # sync 00:18:15.357 06:48:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:15.357 06:48:29 -- nvmf/common.sh@119 -- # set +e 00:18:15.357 06:48:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:15.357 06:48:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:15.357 rmmod nvme_tcp 00:18:15.357 rmmod nvme_fabrics 00:18:15.357 rmmod nvme_keyring 00:18:15.357 06:48:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:15.357 06:48:29 -- nvmf/common.sh@123 -- # set -e 00:18:15.357 06:48:29 -- nvmf/common.sh@124 -- # return 0 00:18:15.357 06:48:29 -- nvmf/common.sh@477 -- # '[' -n 73572 ']' 00:18:15.357 06:48:29 -- nvmf/common.sh@478 -- # killprocess 73572 00:18:15.357 06:48:29 -- common/autotest_common.sh@936 -- # '[' -z 73572 ']' 00:18:15.357 06:48:29 -- common/autotest_common.sh@940 -- # kill -0 73572 00:18:15.357 06:48:29 -- common/autotest_common.sh@941 -- # uname 00:18:15.357 06:48:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:15.357 06:48:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73572 00:18:15.357 killing process with pid 73572 00:18:15.357 06:48:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:15.357 06:48:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:15.357 06:48:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73572' 00:18:15.357 06:48:29 -- common/autotest_common.sh@955 -- # kill 73572 00:18:15.357 06:48:29 -- common/autotest_common.sh@960 -- # wait 73572 00:18:15.616 06:48:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:15.616 06:48:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:15.616 06:48:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:15.616 06:48:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.616 06:48:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:15.616 06:48:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.616 06:48:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.616 06:48:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.616 06:48:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:15.616 ************************************ 00:18:15.616 END TEST nvmf_timeout 00:18:15.616 ************************************ 00:18:15.616 00:18:15.616 real 0m47.241s 00:18:15.616 user 2m18.944s 00:18:15.616 sys 0m5.398s 00:18:15.616 06:48:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:15.616 06:48:29 -- common/autotest_common.sh@10 -- # set +x 00:18:15.616 06:48:29 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:18:15.616 06:48:29 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:18:15.616 06:48:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:15.616 06:48:29 -- common/autotest_common.sh@10 -- # set +x 00:18:15.616 06:48:29 -- nvmf/nvmf.sh@129 
-- # trap - SIGINT SIGTERM EXIT 00:18:15.616 00:18:15.616 real 10m32.496s 00:18:15.616 user 29m35.058s 00:18:15.616 sys 3m19.412s 00:18:15.616 06:48:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:15.616 06:48:29 -- common/autotest_common.sh@10 -- # set +x 00:18:15.616 ************************************ 00:18:15.616 END TEST nvmf_tcp 00:18:15.616 ************************************ 00:18:15.616 06:48:29 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:18:15.616 06:48:29 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:15.616 06:48:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:15.616 06:48:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:15.616 06:48:29 -- common/autotest_common.sh@10 -- # set +x 00:18:15.616 ************************************ 00:18:15.616 START TEST nvmf_dif 00:18:15.616 ************************************ 00:18:15.616 06:48:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:18:15.875 * Looking for test storage... 00:18:15.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:15.876 06:48:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:15.876 06:48:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:15.876 06:48:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:15.876 06:48:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:15.876 06:48:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:15.876 06:48:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:15.876 06:48:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:15.876 06:48:29 -- scripts/common.sh@335 -- # IFS=.-: 00:18:15.876 06:48:29 -- scripts/common.sh@335 -- # read -ra ver1 00:18:15.876 06:48:29 -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.876 06:48:29 -- scripts/common.sh@336 -- # read -ra ver2 00:18:15.876 06:48:29 -- scripts/common.sh@337 -- # local 'op=<' 00:18:15.876 06:48:29 -- scripts/common.sh@339 -- # ver1_l=2 00:18:15.876 06:48:29 -- scripts/common.sh@340 -- # ver2_l=1 00:18:15.876 06:48:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:15.876 06:48:29 -- scripts/common.sh@343 -- # case "$op" in 00:18:15.876 06:48:29 -- scripts/common.sh@344 -- # : 1 00:18:15.876 06:48:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:15.876 06:48:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.876 06:48:29 -- scripts/common.sh@364 -- # decimal 1 00:18:15.876 06:48:29 -- scripts/common.sh@352 -- # local d=1 00:18:15.876 06:48:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.876 06:48:29 -- scripts/common.sh@354 -- # echo 1 00:18:15.876 06:48:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:15.876 06:48:29 -- scripts/common.sh@365 -- # decimal 2 00:18:15.876 06:48:29 -- scripts/common.sh@352 -- # local d=2 00:18:15.876 06:48:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.876 06:48:29 -- scripts/common.sh@354 -- # echo 2 00:18:15.876 06:48:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:15.876 06:48:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:15.876 06:48:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:15.876 06:48:29 -- scripts/common.sh@367 -- # return 0 00:18:15.876 06:48:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:15.876 06:48:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:15.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.876 --rc genhtml_branch_coverage=1 00:18:15.876 --rc genhtml_function_coverage=1 00:18:15.876 --rc genhtml_legend=1 00:18:15.876 --rc geninfo_all_blocks=1 00:18:15.876 --rc geninfo_unexecuted_blocks=1 00:18:15.876 00:18:15.876 ' 00:18:15.876 06:48:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:15.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.876 --rc genhtml_branch_coverage=1 00:18:15.876 --rc genhtml_function_coverage=1 00:18:15.876 --rc genhtml_legend=1 00:18:15.876 --rc geninfo_all_blocks=1 00:18:15.876 --rc geninfo_unexecuted_blocks=1 00:18:15.876 00:18:15.876 ' 00:18:15.876 06:48:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:15.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.876 --rc genhtml_branch_coverage=1 00:18:15.876 --rc genhtml_function_coverage=1 00:18:15.876 --rc genhtml_legend=1 00:18:15.876 --rc geninfo_all_blocks=1 00:18:15.876 --rc geninfo_unexecuted_blocks=1 00:18:15.876 00:18:15.876 ' 00:18:15.876 06:48:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:15.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:15.876 --rc genhtml_branch_coverage=1 00:18:15.876 --rc genhtml_function_coverage=1 00:18:15.876 --rc genhtml_legend=1 00:18:15.876 --rc geninfo_all_blocks=1 00:18:15.876 --rc geninfo_unexecuted_blocks=1 00:18:15.876 00:18:15.876 ' 00:18:15.876 06:48:29 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:15.876 06:48:29 -- nvmf/common.sh@7 -- # uname -s 00:18:15.876 06:48:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.876 06:48:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.876 06:48:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.876 06:48:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.876 06:48:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.876 06:48:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.876 06:48:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.876 06:48:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.876 06:48:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.876 06:48:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.876 06:48:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:18:15.876 
06:48:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:18:15.876 06:48:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.876 06:48:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.876 06:48:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:15.876 06:48:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:15.876 06:48:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.876 06:48:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.876 06:48:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.876 06:48:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.876 06:48:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.876 06:48:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.876 06:48:29 -- paths/export.sh@5 -- # export PATH 00:18:15.876 06:48:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.876 06:48:29 -- nvmf/common.sh@46 -- # : 0 00:18:15.876 06:48:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:15.876 06:48:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:15.876 06:48:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:15.876 06:48:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.876 06:48:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.876 06:48:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:15.876 06:48:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:15.876 06:48:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:15.876 06:48:29 -- target/dif.sh@15 -- # NULL_META=16 00:18:15.876 06:48:29 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:18:15.876 06:48:29 -- target/dif.sh@15 -- # NULL_SIZE=64 00:18:15.876 06:48:29 -- target/dif.sh@15 -- # NULL_DIF=1 00:18:15.876 06:48:29 -- target/dif.sh@135 -- # nvmftestinit 00:18:15.876 06:48:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:15.876 06:48:29 
-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.876 06:48:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:15.876 06:48:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:15.876 06:48:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:15.876 06:48:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.876 06:48:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:18:15.876 06:48:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.876 06:48:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:15.876 06:48:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:15.876 06:48:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:15.876 06:48:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:15.876 06:48:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:15.876 06:48:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:15.877 06:48:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.877 06:48:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.877 06:48:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:15.877 06:48:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:15.877 06:48:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:15.877 06:48:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:15.877 06:48:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:15.877 06:48:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.877 06:48:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:15.877 06:48:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:15.877 06:48:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:15.877 06:48:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:15.877 06:48:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:15.877 06:48:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:15.877 Cannot find device "nvmf_tgt_br" 00:18:15.877 06:48:29 -- nvmf/common.sh@154 -- # true 00:18:15.877 06:48:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:15.877 Cannot find device "nvmf_tgt_br2" 00:18:15.877 06:48:29 -- nvmf/common.sh@155 -- # true 00:18:15.877 06:48:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:15.877 06:48:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:15.877 Cannot find device "nvmf_tgt_br" 00:18:15.877 06:48:29 -- nvmf/common.sh@157 -- # true 00:18:15.877 06:48:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:15.877 Cannot find device "nvmf_tgt_br2" 00:18:15.877 06:48:29 -- nvmf/common.sh@158 -- # true 00:18:15.877 06:48:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:16.136 06:48:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:16.136 06:48:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.136 06:48:29 -- nvmf/common.sh@161 -- # true 00:18:16.136 06:48:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.136 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.136 06:48:29 -- nvmf/common.sh@162 -- # true 00:18:16.136 06:48:29 -- nvmf/common.sh@165 -- # ip netns add 
nvmf_tgt_ns_spdk 00:18:16.136 06:48:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:16.136 06:48:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:16.136 06:48:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:16.136 06:48:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:16.136 06:48:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:16.136 06:48:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:16.136 06:48:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:16.136 06:48:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:16.136 06:48:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:16.136 06:48:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:16.136 06:48:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:16.136 06:48:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:16.137 06:48:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.137 06:48:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:16.137 06:48:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:16.137 06:48:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:16.137 06:48:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:16.137 06:48:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:16.137 06:48:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:16.137 06:48:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:16.137 06:48:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:16.137 06:48:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:16.137 06:48:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:16.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:18:16.137 00:18:16.137 --- 10.0.0.2 ping statistics --- 00:18:16.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.137 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:16.137 06:48:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:16.137 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:16.137 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:18:16.137 00:18:16.137 --- 10.0.0.3 ping statistics --- 00:18:16.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.137 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:16.137 06:48:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:16.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:16.137 00:18:16.137 --- 10.0.0.1 ping statistics --- 00:18:16.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.137 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:16.137 06:48:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.137 06:48:30 -- nvmf/common.sh@421 -- # return 0 00:18:16.137 06:48:30 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:18:16.137 06:48:30 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:16.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:16.704 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:16.704 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:16.704 06:48:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.704 06:48:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:16.704 06:48:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:16.704 06:48:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.704 06:48:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:16.704 06:48:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:16.704 06:48:30 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:18:16.704 06:48:30 -- target/dif.sh@137 -- # nvmfappstart 00:18:16.704 06:48:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:16.704 06:48:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:16.704 06:48:30 -- common/autotest_common.sh@10 -- # set +x 00:18:16.704 06:48:30 -- nvmf/common.sh@469 -- # nvmfpid=74525 00:18:16.704 06:48:30 -- nvmf/common.sh@470 -- # waitforlisten 74525 00:18:16.704 06:48:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:16.704 06:48:30 -- common/autotest_common.sh@829 -- # '[' -z 74525 ']' 00:18:16.704 06:48:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.704 06:48:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.704 06:48:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.704 06:48:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.704 06:48:30 -- common/autotest_common.sh@10 -- # set +x 00:18:16.704 [2024-12-14 06:48:30.576278] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:16.704 [2024-12-14 06:48:30.576371] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.963 [2024-12-14 06:48:30.718138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.963 [2024-12-14 06:48:30.785324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:16.963 [2024-12-14 06:48:30.785717] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.963 [2024-12-14 06:48:30.785842] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:16.963 [2024-12-14 06:48:30.785984] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.963 [2024-12-14 06:48:30.786101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.900 06:48:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.900 06:48:31 -- common/autotest_common.sh@862 -- # return 0 00:18:17.900 06:48:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:17.900 06:48:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:17.900 06:48:31 -- common/autotest_common.sh@10 -- # set +x 00:18:17.900 06:48:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.900 06:48:31 -- target/dif.sh@139 -- # create_transport 00:18:17.900 06:48:31 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:18:17.900 06:48:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.900 06:48:31 -- common/autotest_common.sh@10 -- # set +x 00:18:17.900 [2024-12-14 06:48:31.607130] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.900 06:48:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.900 06:48:31 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:18:17.900 06:48:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:17.900 06:48:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:17.900 06:48:31 -- common/autotest_common.sh@10 -- # set +x 00:18:17.900 ************************************ 00:18:17.900 START TEST fio_dif_1_default 00:18:17.900 ************************************ 00:18:17.900 06:48:31 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:18:17.900 06:48:31 -- target/dif.sh@86 -- # create_subsystems 0 00:18:17.900 06:48:31 -- target/dif.sh@28 -- # local sub 00:18:17.900 06:48:31 -- target/dif.sh@30 -- # for sub in "$@" 00:18:17.900 06:48:31 -- target/dif.sh@31 -- # create_subsystem 0 00:18:17.900 06:48:31 -- target/dif.sh@18 -- # local sub_id=0 00:18:17.900 06:48:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:17.900 06:48:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.900 06:48:31 -- common/autotest_common.sh@10 -- # set +x 00:18:17.900 bdev_null0 00:18:17.900 06:48:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.900 06:48:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:17.900 06:48:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.900 06:48:31 -- common/autotest_common.sh@10 -- # set +x 00:18:17.900 06:48:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.900 06:48:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:17.900 06:48:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.900 06:48:31 -- common/autotest_common.sh@10 -- # set +x 00:18:17.900 06:48:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.900 06:48:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:17.900 06:48:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.900 06:48:31 -- common/autotest_common.sh@10 -- # set +x 00:18:17.901 [2024-12-14 06:48:31.651258] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.901 06:48:31 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.901 06:48:31 -- target/dif.sh@87 -- # fio /dev/fd/62 00:18:17.901 06:48:31 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:18:17.901 06:48:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:17.901 06:48:31 -- nvmf/common.sh@520 -- # config=() 00:18:17.901 06:48:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:17.901 06:48:31 -- nvmf/common.sh@520 -- # local subsystem config 00:18:17.901 06:48:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:17.901 06:48:31 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:17.901 06:48:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:17.901 { 00:18:17.901 "params": { 00:18:17.901 "name": "Nvme$subsystem", 00:18:17.901 "trtype": "$TEST_TRANSPORT", 00:18:17.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:17.901 "adrfam": "ipv4", 00:18:17.901 "trsvcid": "$NVMF_PORT", 00:18:17.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:17.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:17.901 "hdgst": ${hdgst:-false}, 00:18:17.901 "ddgst": ${ddgst:-false} 00:18:17.901 }, 00:18:17.901 "method": "bdev_nvme_attach_controller" 00:18:17.901 } 00:18:17.901 EOF 00:18:17.901 )") 00:18:17.901 06:48:31 -- target/dif.sh@82 -- # gen_fio_conf 00:18:17.901 06:48:31 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:17.901 06:48:31 -- target/dif.sh@54 -- # local file 00:18:17.901 06:48:31 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:17.901 06:48:31 -- target/dif.sh@56 -- # cat 00:18:17.901 06:48:31 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:17.901 06:48:31 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.901 06:48:31 -- common/autotest_common.sh@1330 -- # shift 00:18:17.901 06:48:31 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:17.901 06:48:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:17.901 06:48:31 -- nvmf/common.sh@542 -- # cat 00:18:17.901 06:48:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.901 06:48:31 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:17.901 06:48:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:17.901 06:48:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:17.901 06:48:31 -- target/dif.sh@72 -- # (( file <= files )) 00:18:17.901 06:48:31 -- nvmf/common.sh@544 -- # jq . 
00:18:17.901 06:48:31 -- nvmf/common.sh@545 -- # IFS=, 00:18:17.901 06:48:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:17.901 "params": { 00:18:17.901 "name": "Nvme0", 00:18:17.901 "trtype": "tcp", 00:18:17.901 "traddr": "10.0.0.2", 00:18:17.901 "adrfam": "ipv4", 00:18:17.901 "trsvcid": "4420", 00:18:17.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:17.901 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:17.901 "hdgst": false, 00:18:17.901 "ddgst": false 00:18:17.901 }, 00:18:17.901 "method": "bdev_nvme_attach_controller" 00:18:17.901 }' 00:18:17.901 06:48:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:17.901 06:48:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:17.901 06:48:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:17.901 06:48:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:17.901 06:48:31 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:17.901 06:48:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:17.901 06:48:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:17.901 06:48:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:17.901 06:48:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:17.901 06:48:31 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:17.901 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:17.901 fio-3.35 00:18:17.901 Starting 1 thread 00:18:18.468 [2024-12-14 06:48:32.211455] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:18.468 [2024-12-14 06:48:32.211539] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:28.446 00:18:28.446 filename0: (groupid=0, jobs=1): err= 0: pid=74596: Sat Dec 14 06:48:42 2024 00:18:28.446 read: IOPS=9579, BW=37.4MiB/s (39.2MB/s)(374MiB/10001msec) 00:18:28.446 slat (nsec): min=6095, max=82359, avg=7945.34, stdev=3738.64 00:18:28.446 clat (usec): min=318, max=3342, avg=394.32, stdev=50.32 00:18:28.446 lat (usec): min=324, max=3384, avg=402.27, stdev=51.15 00:18:28.446 clat percentiles (usec): 00:18:28.446 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 355], 00:18:28.446 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 396], 00:18:28.446 | 70.00th=[ 412], 80.00th=[ 429], 90.00th=[ 457], 95.00th=[ 482], 00:18:28.446 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 570], 99.95th=[ 594], 00:18:28.446 | 99.99th=[ 1549] 00:18:28.446 bw ( KiB/s): min=36960, max=40384, per=99.99%, avg=38311.79, stdev=788.86, samples=19 00:18:28.446 iops : min= 9240, max=10096, avg=9577.95, stdev=197.21, samples=19 00:18:28.446 lat (usec) : 500=97.26%, 750=2.72%, 1000=0.01% 00:18:28.446 lat (msec) : 2=0.01%, 4=0.01% 00:18:28.446 cpu : usr=85.08%, sys=12.88%, ctx=21, majf=0, minf=9 00:18:28.446 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:28.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:28.446 issued rwts: total=95800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:28.446 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:28.446 00:18:28.446 Run status group 0 (all jobs): 00:18:28.446 READ: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=374MiB (392MB), run=10001-10001msec 00:18:28.710 06:48:42 -- target/dif.sh@88 -- # destroy_subsystems 0 00:18:28.710 06:48:42 -- target/dif.sh@43 -- # local sub 00:18:28.710 06:48:42 -- target/dif.sh@45 -- # for sub in "$@" 00:18:28.710 06:48:42 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:28.710 06:48:42 -- target/dif.sh@36 -- # local sub_id=0 00:18:28.710 06:48:42 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:28.710 06:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.710 06:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.710 06:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.710 06:48:42 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:28.710 06:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.710 06:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.710 06:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.710 00:18:28.710 real 0m10.905s 00:18:28.710 user 0m9.083s 00:18:28.710 sys 0m1.529s 00:18:28.710 06:48:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:28.710 ************************************ 00:18:28.710 END TEST fio_dif_1_default 00:18:28.710 ************************************ 00:18:28.710 06:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.710 06:48:42 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:18:28.710 06:48:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:28.710 06:48:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:28.710 06:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.710 ************************************ 00:18:28.710 START TEST 
fio_dif_1_multi_subsystems 00:18:28.710 ************************************ 00:18:28.710 06:48:42 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:18:28.710 06:48:42 -- target/dif.sh@92 -- # local files=1 00:18:28.710 06:48:42 -- target/dif.sh@94 -- # create_subsystems 0 1 00:18:28.710 06:48:42 -- target/dif.sh@28 -- # local sub 00:18:28.710 06:48:42 -- target/dif.sh@30 -- # for sub in "$@" 00:18:28.710 06:48:42 -- target/dif.sh@31 -- # create_subsystem 0 00:18:28.710 06:48:42 -- target/dif.sh@18 -- # local sub_id=0 00:18:28.710 06:48:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:28.710 06:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.710 06:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.710 bdev_null0 00:18:28.710 06:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.710 06:48:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:28.710 06:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.710 06:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.710 06:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.710 06:48:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:28.710 06:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.710 06:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.710 06:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.710 06:48:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:28.710 06:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.710 06:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.710 [2024-12-14 06:48:42.609930] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.710 06:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.710 06:48:42 -- target/dif.sh@30 -- # for sub in "$@" 00:18:28.710 06:48:42 -- target/dif.sh@31 -- # create_subsystem 1 00:18:28.710 06:48:42 -- target/dif.sh@18 -- # local sub_id=1 00:18:28.710 06:48:42 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:28.710 06:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.710 06:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.710 bdev_null1 00:18:28.710 06:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.710 06:48:42 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:28.710 06:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.710 06:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.710 06:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.710 06:48:42 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:28.710 06:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.710 06:48:42 -- common/autotest_common.sh@10 -- # set +x 00:18:28.710 06:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.710 06:48:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.710 06:48:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.710 06:48:42 -- 
common/autotest_common.sh@10 -- # set +x 00:18:28.710 06:48:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.710 06:48:42 -- target/dif.sh@95 -- # fio /dev/fd/62 00:18:28.710 06:48:42 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:18:28.710 06:48:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:28.710 06:48:42 -- nvmf/common.sh@520 -- # config=() 00:18:28.710 06:48:42 -- nvmf/common.sh@520 -- # local subsystem config 00:18:28.710 06:48:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:28.710 06:48:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:28.710 06:48:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:28.710 { 00:18:28.710 "params": { 00:18:28.710 "name": "Nvme$subsystem", 00:18:28.710 "trtype": "$TEST_TRANSPORT", 00:18:28.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.710 "adrfam": "ipv4", 00:18:28.710 "trsvcid": "$NVMF_PORT", 00:18:28.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.710 "hdgst": ${hdgst:-false}, 00:18:28.710 "ddgst": ${ddgst:-false} 00:18:28.710 }, 00:18:28.710 "method": "bdev_nvme_attach_controller" 00:18:28.710 } 00:18:28.710 EOF 00:18:28.710 )") 00:18:28.710 06:48:42 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:28.710 06:48:42 -- target/dif.sh@82 -- # gen_fio_conf 00:18:28.710 06:48:42 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:28.710 06:48:42 -- target/dif.sh@54 -- # local file 00:18:28.710 06:48:42 -- target/dif.sh@56 -- # cat 00:18:28.710 06:48:42 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:28.710 06:48:42 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:28.710 06:48:42 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.710 06:48:42 -- common/autotest_common.sh@1330 -- # shift 00:18:28.710 06:48:42 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:28.710 06:48:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.710 06:48:42 -- nvmf/common.sh@542 -- # cat 00:18:28.710 06:48:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.710 06:48:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:28.710 06:48:42 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:28.710 06:48:42 -- target/dif.sh@72 -- # (( file <= files )) 00:18:28.710 06:48:42 -- target/dif.sh@73 -- # cat 00:18:28.710 06:48:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:28.710 06:48:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:28.710 06:48:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:28.710 { 00:18:28.710 "params": { 00:18:28.710 "name": "Nvme$subsystem", 00:18:28.710 "trtype": "$TEST_TRANSPORT", 00:18:28.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.710 "adrfam": "ipv4", 00:18:28.710 "trsvcid": "$NVMF_PORT", 00:18:28.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.710 "hdgst": ${hdgst:-false}, 00:18:28.710 "ddgst": ${ddgst:-false} 00:18:28.710 }, 00:18:28.710 "method": "bdev_nvme_attach_controller" 00:18:28.710 } 00:18:28.710 EOF 00:18:28.710 )") 00:18:28.710 06:48:42 -- target/dif.sh@72 -- # (( file++ )) 00:18:28.710 06:48:42 -- 
target/dif.sh@72 -- # (( file <= files )) 00:18:28.710 06:48:42 -- nvmf/common.sh@542 -- # cat 00:18:28.711 06:48:42 -- nvmf/common.sh@544 -- # jq . 00:18:28.711 06:48:42 -- nvmf/common.sh@545 -- # IFS=, 00:18:28.711 06:48:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:28.711 "params": { 00:18:28.711 "name": "Nvme0", 00:18:28.711 "trtype": "tcp", 00:18:28.711 "traddr": "10.0.0.2", 00:18:28.711 "adrfam": "ipv4", 00:18:28.711 "trsvcid": "4420", 00:18:28.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:28.711 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:28.711 "hdgst": false, 00:18:28.711 "ddgst": false 00:18:28.711 }, 00:18:28.711 "method": "bdev_nvme_attach_controller" 00:18:28.711 },{ 00:18:28.711 "params": { 00:18:28.711 "name": "Nvme1", 00:18:28.711 "trtype": "tcp", 00:18:28.711 "traddr": "10.0.0.2", 00:18:28.711 "adrfam": "ipv4", 00:18:28.711 "trsvcid": "4420", 00:18:28.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.711 "hdgst": false, 00:18:28.711 "ddgst": false 00:18:28.711 }, 00:18:28.711 "method": "bdev_nvme_attach_controller" 00:18:28.711 }' 00:18:28.711 06:48:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:28.711 06:48:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:28.711 06:48:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.711 06:48:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:28.711 06:48:42 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:28.711 06:48:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:28.977 06:48:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:28.977 06:48:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:28.977 06:48:42 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:28.977 06:48:42 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:28.977 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:28.977 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:18:28.977 fio-3.35 00:18:28.977 Starting 2 threads 00:18:29.544 [2024-12-14 06:48:43.282162] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:18:29.544 [2024-12-14 06:48:43.282824] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:39.520 00:18:39.520 filename0: (groupid=0, jobs=1): err= 0: pid=74757: Sat Dec 14 06:48:53 2024 00:18:39.520 read: IOPS=5191, BW=20.3MiB/s (21.3MB/s)(203MiB/10001msec) 00:18:39.520 slat (nsec): min=6295, max=93017, avg=12934.75, stdev=5084.88 00:18:39.520 clat (usec): min=563, max=1164, avg=734.86, stdev=62.64 00:18:39.520 lat (usec): min=582, max=1185, avg=747.80, stdev=63.40 00:18:39.520 clat percentiles (usec): 00:18:39.520 | 1.00th=[ 627], 5.00th=[ 644], 10.00th=[ 660], 20.00th=[ 676], 00:18:39.520 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[ 725], 60.00th=[ 742], 00:18:39.520 | 70.00th=[ 766], 80.00th=[ 791], 90.00th=[ 824], 95.00th=[ 848], 00:18:39.520 | 99.00th=[ 889], 99.50th=[ 898], 99.90th=[ 930], 99.95th=[ 947], 00:18:39.520 | 99.99th=[ 996] 00:18:39.520 bw ( KiB/s): min=19808, max=22272, per=49.94%, avg=20739.26, stdev=590.72, samples=19 00:18:39.520 iops : min= 4952, max= 5568, avg=5184.79, stdev=147.70, samples=19 00:18:39.520 lat (usec) : 750=63.04%, 1000=36.95% 00:18:39.520 lat (msec) : 2=0.01% 00:18:39.520 cpu : usr=89.94%, sys=8.55%, ctx=13, majf=0, minf=0 00:18:39.520 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:39.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.520 issued rwts: total=51916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.520 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:39.520 filename1: (groupid=0, jobs=1): err= 0: pid=74758: Sat Dec 14 06:48:53 2024 00:18:39.520 read: IOPS=5191, BW=20.3MiB/s (21.3MB/s)(203MiB/10001msec) 00:18:39.520 slat (nsec): min=6350, max=97079, avg=13040.02, stdev=5126.92 00:18:39.520 clat (usec): min=476, max=1175, avg=735.11, stdev=67.13 00:18:39.520 lat (usec): min=483, max=1194, avg=748.15, stdev=68.11 00:18:39.520 clat percentiles (usec): 00:18:39.520 | 1.00th=[ 611], 5.00th=[ 635], 10.00th=[ 652], 20.00th=[ 676], 00:18:39.520 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[ 725], 60.00th=[ 750], 00:18:39.520 | 70.00th=[ 766], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 857], 00:18:39.520 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 947], 99.95th=[ 971], 00:18:39.520 | 99.99th=[ 1123] 00:18:39.520 bw ( KiB/s): min=19808, max=22272, per=49.93%, avg=20737.58, stdev=588.65, samples=19 00:18:39.520 iops : min= 4952, max= 5568, avg=5184.37, stdev=147.19, samples=19 00:18:39.520 lat (usec) : 500=0.01%, 750=62.06%, 1000=37.90% 00:18:39.520 lat (msec) : 2=0.03% 00:18:39.520 cpu : usr=89.76%, sys=8.61%, ctx=14, majf=0, minf=9 00:18:39.520 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:39.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:39.520 issued rwts: total=51916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:39.520 latency : target=0, window=0, percentile=100.00%, depth=4 00:18:39.520 00:18:39.520 Run status group 0 (all jobs): 00:18:39.520 READ: bw=40.6MiB/s (42.5MB/s), 20.3MiB/s-20.3MiB/s (21.3MB/s-21.3MB/s), io=406MiB (425MB), run=10001-10001msec 00:18:39.779 06:48:53 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:18:39.779 06:48:53 -- target/dif.sh@43 -- # local sub 00:18:39.779 06:48:53 -- target/dif.sh@45 -- # for sub in "$@" 00:18:39.779 06:48:53 -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:18:39.779 06:48:53 -- target/dif.sh@36 -- # local sub_id=0 00:18:39.779 06:48:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:39.779 06:48:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.779 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.779 06:48:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.779 06:48:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:39.779 06:48:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.779 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.779 06:48:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.780 06:48:53 -- target/dif.sh@45 -- # for sub in "$@" 00:18:39.780 06:48:53 -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:39.780 06:48:53 -- target/dif.sh@36 -- # local sub_id=1 00:18:39.780 06:48:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.780 06:48:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.780 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.780 06:48:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.780 06:48:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:39.780 06:48:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.780 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.780 06:48:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.780 00:18:39.780 real 0m11.036s 00:18:39.780 user 0m18.654s 00:18:39.780 sys 0m1.970s 00:18:39.780 06:48:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:39.780 ************************************ 00:18:39.780 END TEST fio_dif_1_multi_subsystems 00:18:39.780 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.780 ************************************ 00:18:39.780 06:48:53 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:18:39.780 06:48:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:39.780 06:48:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.780 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.780 ************************************ 00:18:39.780 START TEST fio_dif_rand_params 00:18:39.780 ************************************ 00:18:39.780 06:48:53 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:18:39.780 06:48:53 -- target/dif.sh@100 -- # local NULL_DIF 00:18:39.780 06:48:53 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:18:39.780 06:48:53 -- target/dif.sh@103 -- # NULL_DIF=3 00:18:39.780 06:48:53 -- target/dif.sh@103 -- # bs=128k 00:18:39.780 06:48:53 -- target/dif.sh@103 -- # numjobs=3 00:18:39.780 06:48:53 -- target/dif.sh@103 -- # iodepth=3 00:18:39.780 06:48:53 -- target/dif.sh@103 -- # runtime=5 00:18:39.780 06:48:53 -- target/dif.sh@105 -- # create_subsystems 0 00:18:39.780 06:48:53 -- target/dif.sh@28 -- # local sub 00:18:39.780 06:48:53 -- target/dif.sh@30 -- # for sub in "$@" 00:18:39.780 06:48:53 -- target/dif.sh@31 -- # create_subsystem 0 00:18:39.780 06:48:53 -- target/dif.sh@18 -- # local sub_id=0 00:18:39.780 06:48:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:18:39.780 06:48:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.780 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.780 bdev_null0 00:18:39.780 06:48:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.780 
06:48:53 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:39.780 06:48:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.780 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.780 06:48:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.780 06:48:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:39.780 06:48:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.780 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.780 06:48:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.780 06:48:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:39.780 06:48:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.780 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:18:39.780 [2024-12-14 06:48:53.705676] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.780 06:48:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.780 06:48:53 -- target/dif.sh@106 -- # fio /dev/fd/62 00:18:39.780 06:48:53 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:18:39.780 06:48:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:18:39.780 06:48:53 -- nvmf/common.sh@520 -- # config=() 00:18:39.780 06:48:53 -- nvmf/common.sh@520 -- # local subsystem config 00:18:39.780 06:48:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:39.780 06:48:53 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:39.780 06:48:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:39.780 { 00:18:39.780 "params": { 00:18:39.780 "name": "Nvme$subsystem", 00:18:39.780 "trtype": "$TEST_TRANSPORT", 00:18:39.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:39.780 "adrfam": "ipv4", 00:18:39.780 "trsvcid": "$NVMF_PORT", 00:18:39.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:39.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:39.780 "hdgst": ${hdgst:-false}, 00:18:39.780 "ddgst": ${ddgst:-false} 00:18:39.780 }, 00:18:39.780 "method": "bdev_nvme_attach_controller" 00:18:39.780 } 00:18:39.780 EOF 00:18:39.780 )") 00:18:39.780 06:48:53 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:39.780 06:48:53 -- target/dif.sh@82 -- # gen_fio_conf 00:18:39.780 06:48:53 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:39.780 06:48:53 -- target/dif.sh@54 -- # local file 00:18:39.780 06:48:53 -- target/dif.sh@56 -- # cat 00:18:39.780 06:48:53 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:39.780 06:48:53 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:39.780 06:48:53 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.780 06:48:53 -- common/autotest_common.sh@1330 -- # shift 00:18:39.780 06:48:53 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:39.780 06:48:53 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:39.780 06:48:53 -- nvmf/common.sh@542 -- # cat 00:18:39.780 06:48:53 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.780 06:48:53 -- common/autotest_common.sh@1334 -- # grep libasan 
00:18:39.780 06:48:53 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:39.780 06:48:53 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:39.780 06:48:53 -- target/dif.sh@72 -- # (( file <= files )) 00:18:39.780 06:48:53 -- nvmf/common.sh@544 -- # jq . 00:18:39.780 06:48:53 -- nvmf/common.sh@545 -- # IFS=, 00:18:39.780 06:48:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:39.780 "params": { 00:18:39.780 "name": "Nvme0", 00:18:39.780 "trtype": "tcp", 00:18:39.780 "traddr": "10.0.0.2", 00:18:39.780 "adrfam": "ipv4", 00:18:39.780 "trsvcid": "4420", 00:18:39.780 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:39.780 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:39.780 "hdgst": false, 00:18:39.780 "ddgst": false 00:18:39.780 }, 00:18:39.780 "method": "bdev_nvme_attach_controller" 00:18:39.780 }' 00:18:39.780 06:48:53 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:39.780 06:48:53 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:39.780 06:48:53 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:39.780 06:48:53 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.780 06:48:53 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:39.780 06:48:53 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:40.039 06:48:53 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:40.039 06:48:53 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:40.039 06:48:53 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:40.039 06:48:53 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:40.039 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:18:40.039 ... 00:18:40.039 fio-3.35 00:18:40.039 Starting 3 threads 00:18:40.298 [2024-12-14 06:48:54.248087] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
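The trace above shows how target/dif.sh drives fio against the NVMe/TCP target: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry for each subsystem, gen_fio_conf emits the fio job file, and fio_bdev preloads the spdk_bdev plugin before running /usr/src/fio/fio with the JSON passed as /dev/fd/62 and the job file as /dev/fd/61. A minimal standalone sketch of the same invocation, assuming the usual SPDK conventions, is below. Only the params block, the job parameters (randread, 128k, iodepth=3, 3 jobs, 5s) and the command-line flags appear verbatim in the trace; the "subsystems"/"config" JSON wrapper, the Nvme0n1 bdev name, and thread=1 are assumptions based on how the SPDK bdev JSON config and fio plugin are normally used, not something this log states.

    # bdev.json -- assumed wrapper around the params printed by gen_nvmf_target_json
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    # job.fio -- mirrors the job parameters reported by fio in this run
    [filename0]
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    thread=1                 ; assumed: the plugin runs jobs as threads ("Starting 3 threads")
    filename=Nvme0n1         ; assumed bdev name exposed by the attached controller

    # run fio through the SPDK bdev plugin, mirroring the fio_bdev wrapper in the trace
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio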
00:18:40.298 [2024-12-14 06:48:54.248173] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:45.562 00:18:45.562 filename0: (groupid=0, jobs=1): err= 0: pid=74908: Sat Dec 14 06:48:59 2024 00:18:45.562 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(169MiB/5010msec) 00:18:45.562 slat (nsec): min=6768, max=58056, avg=9852.59, stdev=4240.31 00:18:45.562 clat (usec): min=10221, max=12691, avg=11107.30, stdev=387.50 00:18:45.562 lat (usec): min=10228, max=12717, avg=11117.16, stdev=387.76 00:18:45.562 clat percentiles (usec): 00:18:45.562 | 1.00th=[10552], 5.00th=[10552], 10.00th=[10683], 20.00th=[10683], 00:18:45.562 | 30.00th=[10814], 40.00th=[10945], 50.00th=[10945], 60.00th=[11207], 00:18:45.562 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11600], 95.00th=[11731], 00:18:45.562 | 99.00th=[11994], 99.50th=[11994], 99.90th=[12649], 99.95th=[12649], 00:18:45.562 | 99.99th=[12649] 00:18:45.562 bw ( KiB/s): min=33792, max=34560, per=33.33%, avg=34483.20, stdev=242.86, samples=10 00:18:45.562 iops : min= 264, max= 270, avg=269.40, stdev= 1.90, samples=10 00:18:45.562 lat (msec) : 20=100.00% 00:18:45.562 cpu : usr=91.58%, sys=7.69%, ctx=10, majf=0, minf=9 00:18:45.562 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.562 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.562 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:45.562 filename0: (groupid=0, jobs=1): err= 0: pid=74909: Sat Dec 14 06:48:59 2024 00:18:45.562 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(169MiB/5011msec) 00:18:45.562 slat (nsec): min=6739, max=77923, avg=10570.47, stdev=5523.16 00:18:45.562 clat (usec): min=10330, max=12830, avg=11108.42, stdev=384.77 00:18:45.562 lat (usec): min=10338, max=12854, avg=11118.99, stdev=385.60 00:18:45.562 clat percentiles (usec): 00:18:45.562 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10683], 20.00th=[10683], 00:18:45.562 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:18:45.562 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11600], 95.00th=[11731], 00:18:45.562 | 99.00th=[11994], 99.50th=[11994], 99.90th=[12780], 99.95th=[12780], 00:18:45.562 | 99.99th=[12780] 00:18:45.562 bw ( KiB/s): min=33792, max=34560, per=33.33%, avg=34483.20, stdev=242.86, samples=10 00:18:45.562 iops : min= 264, max= 270, avg=269.40, stdev= 1.90, samples=10 00:18:45.562 lat (msec) : 20=100.00% 00:18:45.562 cpu : usr=90.50%, sys=8.72%, ctx=43, majf=0, minf=0 00:18:45.562 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.562 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.562 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:45.562 filename0: (groupid=0, jobs=1): err= 0: pid=74910: Sat Dec 14 06:48:59 2024 00:18:45.562 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(169MiB/5008msec) 00:18:45.562 slat (nsec): min=6799, max=60352, avg=9770.21, stdev=4226.50 00:18:45.562 clat (usec): min=8141, max=13374, avg=11105.08, stdev=416.44 00:18:45.562 lat (usec): min=8150, max=13398, avg=11114.85, stdev=417.02 00:18:45.562 clat percentiles (usec): 00:18:45.562 | 1.00th=[10552], 5.00th=[10683], 10.00th=[10683], 
20.00th=[10814], 00:18:45.562 | 30.00th=[10814], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:18:45.562 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11600], 95.00th=[11731], 00:18:45.562 | 99.00th=[11863], 99.50th=[11994], 99.90th=[13304], 99.95th=[13435], 00:18:45.562 | 99.99th=[13435] 00:18:45.562 bw ( KiB/s): min=33724, max=34560, per=33.33%, avg=34476.40, stdev=264.37, samples=10 00:18:45.562 iops : min= 263, max= 270, avg=269.30, stdev= 2.21, samples=10 00:18:45.562 lat (msec) : 10=0.22%, 20=99.78% 00:18:45.562 cpu : usr=91.73%, sys=7.61%, ctx=4, majf=0, minf=9 00:18:45.562 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.562 issued rwts: total=1350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.562 latency : target=0, window=0, percentile=100.00%, depth=3 00:18:45.562 00:18:45.562 Run status group 0 (all jobs): 00:18:45.562 READ: bw=101MiB/s (106MB/s), 33.7MiB/s-33.7MiB/s (35.3MB/s-35.3MB/s), io=506MiB (531MB), run=5008-5011msec 00:18:45.563 06:48:59 -- target/dif.sh@107 -- # destroy_subsystems 0 00:18:45.563 06:48:59 -- target/dif.sh@43 -- # local sub 00:18:45.563 06:48:59 -- target/dif.sh@45 -- # for sub in "$@" 00:18:45.563 06:48:59 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:45.563 06:48:59 -- target/dif.sh@36 -- # local sub_id=0 00:18:45.563 06:48:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:45.563 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.563 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:45.822 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.822 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@109 -- # NULL_DIF=2 00:18:45.822 06:48:59 -- target/dif.sh@109 -- # bs=4k 00:18:45.822 06:48:59 -- target/dif.sh@109 -- # numjobs=8 00:18:45.822 06:48:59 -- target/dif.sh@109 -- # iodepth=16 00:18:45.822 06:48:59 -- target/dif.sh@109 -- # runtime= 00:18:45.822 06:48:59 -- target/dif.sh@109 -- # files=2 00:18:45.822 06:48:59 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:18:45.822 06:48:59 -- target/dif.sh@28 -- # local sub 00:18:45.822 06:48:59 -- target/dif.sh@30 -- # for sub in "$@" 00:18:45.822 06:48:59 -- target/dif.sh@31 -- # create_subsystem 0 00:18:45.822 06:48:59 -- target/dif.sh@18 -- # local sub_id=0 00:18:45.822 06:48:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:18:45.822 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.822 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 bdev_null0 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:45.822 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.822 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:45.822 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.822 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:45.822 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.822 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 [2024-12-14 06:48:59.601194] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@30 -- # for sub in "$@" 00:18:45.822 06:48:59 -- target/dif.sh@31 -- # create_subsystem 1 00:18:45.822 06:48:59 -- target/dif.sh@18 -- # local sub_id=1 00:18:45.822 06:48:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:18:45.822 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.822 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 bdev_null1 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:45.822 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.822 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:45.822 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.822 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.822 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.822 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@30 -- # for sub in "$@" 00:18:45.822 06:48:59 -- target/dif.sh@31 -- # create_subsystem 2 00:18:45.822 06:48:59 -- target/dif.sh@18 -- # local sub_id=2 00:18:45.822 06:48:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:18:45.822 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.822 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 bdev_null2 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:18:45.822 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.822 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:18:45.822 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.822 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:45.822 06:48:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.822 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:18:45.822 06:48:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.822 06:48:59 -- target/dif.sh@112 -- # fio /dev/fd/62 00:18:45.822 06:48:59 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:18:45.822 06:48:59 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:18:45.822 06:48:59 -- nvmf/common.sh@520 -- # config=() 00:18:45.822 06:48:59 -- nvmf/common.sh@520 -- # local subsystem config 00:18:45.822 06:48:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:45.822 06:48:59 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:45.822 06:48:59 -- target/dif.sh@82 -- # gen_fio_conf 00:18:45.822 06:48:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:45.822 { 00:18:45.822 "params": { 00:18:45.822 "name": "Nvme$subsystem", 00:18:45.822 "trtype": "$TEST_TRANSPORT", 00:18:45.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:45.822 "adrfam": "ipv4", 00:18:45.822 "trsvcid": "$NVMF_PORT", 00:18:45.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:45.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:45.822 "hdgst": ${hdgst:-false}, 00:18:45.822 "ddgst": ${ddgst:-false} 00:18:45.822 }, 00:18:45.822 "method": "bdev_nvme_attach_controller" 00:18:45.823 } 00:18:45.823 EOF 00:18:45.823 )") 00:18:45.823 06:48:59 -- target/dif.sh@54 -- # local file 00:18:45.823 06:48:59 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:45.823 06:48:59 -- target/dif.sh@56 -- # cat 00:18:45.823 06:48:59 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:45.823 06:48:59 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:45.823 06:48:59 -- nvmf/common.sh@542 -- # cat 00:18:45.823 06:48:59 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:45.823 06:48:59 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:45.823 06:48:59 -- common/autotest_common.sh@1330 -- # shift 00:18:45.823 06:48:59 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:45.823 06:48:59 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:45.823 06:48:59 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:45.823 06:48:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:45.823 06:48:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:45.823 { 00:18:45.823 "params": { 00:18:45.823 "name": "Nvme$subsystem", 00:18:45.823 "trtype": "$TEST_TRANSPORT", 00:18:45.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:45.823 "adrfam": "ipv4", 00:18:45.823 "trsvcid": "$NVMF_PORT", 00:18:45.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:45.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:45.823 "hdgst": ${hdgst:-false}, 00:18:45.823 "ddgst": ${ddgst:-false} 00:18:45.823 }, 00:18:45.823 "method": "bdev_nvme_attach_controller" 00:18:45.823 } 00:18:45.823 EOF 00:18:45.823 )") 00:18:45.823 06:48:59 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:45.823 06:48:59 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:45.823 06:48:59 -- 
nvmf/common.sh@542 -- # cat 00:18:45.823 06:48:59 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:45.823 06:48:59 -- target/dif.sh@72 -- # (( file <= files )) 00:18:45.823 06:48:59 -- target/dif.sh@73 -- # cat 00:18:45.823 06:48:59 -- target/dif.sh@72 -- # (( file++ )) 00:18:45.823 06:48:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:45.823 06:48:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:45.823 { 00:18:45.823 "params": { 00:18:45.823 "name": "Nvme$subsystem", 00:18:45.823 "trtype": "$TEST_TRANSPORT", 00:18:45.823 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:45.823 "adrfam": "ipv4", 00:18:45.823 "trsvcid": "$NVMF_PORT", 00:18:45.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:45.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:45.823 "hdgst": ${hdgst:-false}, 00:18:45.823 "ddgst": ${ddgst:-false} 00:18:45.823 }, 00:18:45.823 "method": "bdev_nvme_attach_controller" 00:18:45.823 } 00:18:45.823 EOF 00:18:45.823 )") 00:18:45.823 06:48:59 -- target/dif.sh@72 -- # (( file <= files )) 00:18:45.823 06:48:59 -- target/dif.sh@73 -- # cat 00:18:45.823 06:48:59 -- nvmf/common.sh@542 -- # cat 00:18:45.823 06:48:59 -- target/dif.sh@72 -- # (( file++ )) 00:18:45.823 06:48:59 -- target/dif.sh@72 -- # (( file <= files )) 00:18:45.823 06:48:59 -- nvmf/common.sh@544 -- # jq . 00:18:45.823 06:48:59 -- nvmf/common.sh@545 -- # IFS=, 00:18:45.823 06:48:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:45.823 "params": { 00:18:45.823 "name": "Nvme0", 00:18:45.823 "trtype": "tcp", 00:18:45.823 "traddr": "10.0.0.2", 00:18:45.823 "adrfam": "ipv4", 00:18:45.823 "trsvcid": "4420", 00:18:45.823 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:45.823 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:45.823 "hdgst": false, 00:18:45.823 "ddgst": false 00:18:45.823 }, 00:18:45.823 "method": "bdev_nvme_attach_controller" 00:18:45.823 },{ 00:18:45.823 "params": { 00:18:45.823 "name": "Nvme1", 00:18:45.823 "trtype": "tcp", 00:18:45.823 "traddr": "10.0.0.2", 00:18:45.823 "adrfam": "ipv4", 00:18:45.823 "trsvcid": "4420", 00:18:45.823 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.823 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.823 "hdgst": false, 00:18:45.823 "ddgst": false 00:18:45.823 }, 00:18:45.823 "method": "bdev_nvme_attach_controller" 00:18:45.823 },{ 00:18:45.823 "params": { 00:18:45.823 "name": "Nvme2", 00:18:45.823 "trtype": "tcp", 00:18:45.823 "traddr": "10.0.0.2", 00:18:45.823 "adrfam": "ipv4", 00:18:45.823 "trsvcid": "4420", 00:18:45.823 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:45.823 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:45.823 "hdgst": false, 00:18:45.823 "ddgst": false 00:18:45.823 }, 00:18:45.823 "method": "bdev_nvme_attach_controller" 00:18:45.823 }' 00:18:45.823 06:48:59 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:45.823 06:48:59 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:45.823 06:48:59 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:45.823 06:48:59 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:45.823 06:48:59 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:45.823 06:48:59 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:45.823 06:48:59 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:45.823 06:48:59 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:45.823 06:48:59 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:45.823 06:48:59 -- 
common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:46.082 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:46.082 ... 00:18:46.082 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:46.082 ... 00:18:46.082 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:18:46.082 ... 00:18:46.082 fio-3.35 00:18:46.082 Starting 24 threads 00:18:46.648 [2024-12-14 06:49:00.350659] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:18:46.648 [2024-12-14 06:49:00.350742] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:56.610 00:18:56.610 filename0: (groupid=0, jobs=1): err= 0: pid=75005: Sat Dec 14 06:49:10 2024 00:18:56.610 read: IOPS=228, BW=914KiB/s (936kB/s)(9168KiB/10032msec) 00:18:56.610 slat (usec): min=4, max=8028, avg=30.92, stdev=374.50 00:18:56.610 clat (msec): min=32, max=131, avg=69.85, stdev=15.13 00:18:56.610 lat (msec): min=32, max=131, avg=69.88, stdev=15.13 00:18:56.610 clat percentiles (msec): 00:18:56.610 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:18:56.610 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 72], 00:18:56.610 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 99], 00:18:56.610 | 99.00th=[ 112], 99.50th=[ 118], 99.90th=[ 132], 99.95th=[ 132], 00:18:56.610 | 99.99th=[ 132] 00:18:56.610 bw ( KiB/s): min= 832, max= 1024, per=3.88%, avg=911.80, stdev=52.96, samples=20 00:18:56.610 iops : min= 208, max= 256, avg=227.90, stdev=13.19, samples=20 00:18:56.610 lat (msec) : 50=12.43%, 100=84.25%, 250=3.32% 00:18:56.610 cpu : usr=32.66%, sys=1.65%, ctx=914, majf=0, minf=9 00:18:56.610 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=79.5%, 16=17.0%, 32=0.0%, >=64=0.0% 00:18:56.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.610 complete : 0=0.0%, 4=88.7%, 8=10.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.610 issued rwts: total=2292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.610 filename0: (groupid=0, jobs=1): err= 0: pid=75006: Sat Dec 14 06:49:10 2024 00:18:56.610 read: IOPS=239, BW=957KiB/s (980kB/s)(9600KiB/10031msec) 00:18:56.610 slat (usec): min=4, max=8037, avg=31.95, stdev=326.90 00:18:56.610 clat (msec): min=13, max=131, avg=66.66, stdev=16.49 00:18:56.610 lat (msec): min=13, max=131, avg=66.69, stdev=16.49 00:18:56.610 clat percentiles (msec): 00:18:56.610 | 1.00th=[ 34], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 51], 00:18:56.610 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:18:56.610 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 88], 95.00th=[ 96], 00:18:56.610 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 114], 99.95th=[ 116], 00:18:56.610 | 99.99th=[ 132] 00:18:56.610 bw ( KiB/s): min= 768, max= 1149, per=4.07%, avg=955.25, stdev=83.16, samples=20 00:18:56.610 iops : min= 192, max= 287, avg=238.75, stdev=20.74, samples=20 00:18:56.610 lat (msec) : 20=0.58%, 50=19.92%, 100=76.92%, 250=2.58% 00:18:56.610 cpu : usr=37.51%, sys=2.08%, ctx=1079, majf=0, minf=9 00:18:56.610 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=79.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:18:56.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.610 complete 
: 0=0.0%, 4=88.4%, 8=11.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.610 issued rwts: total=2400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.610 filename0: (groupid=0, jobs=1): err= 0: pid=75007: Sat Dec 14 06:49:10 2024 00:18:56.610 read: IOPS=250, BW=1000KiB/s (1024kB/s)(9.78MiB/10012msec) 00:18:56.610 slat (usec): min=8, max=8023, avg=28.57, stdev=279.94 00:18:56.610 clat (msec): min=14, max=116, avg=63.85, stdev=16.27 00:18:56.610 lat (msec): min=14, max=116, avg=63.88, stdev=16.27 00:18:56.610 clat percentiles (msec): 00:18:56.610 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:18:56.610 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 70], 00:18:56.610 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 83], 95.00th=[ 95], 00:18:56.610 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 114], 99.95th=[ 116], 00:18:56.610 | 99.99th=[ 116] 00:18:56.610 bw ( KiB/s): min= 912, max= 1056, per=4.25%, avg=997.30, stdev=40.51, samples=20 00:18:56.610 iops : min= 228, max= 264, avg=249.30, stdev=10.18, samples=20 00:18:56.610 lat (msec) : 20=0.28%, 50=27.20%, 100=69.73%, 250=2.80% 00:18:56.610 cpu : usr=40.30%, sys=1.90%, ctx=1240, majf=0, minf=9 00:18:56.610 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.2%, 16=16.2%, 32=0.0%, >=64=0.0% 00:18:56.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.610 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.610 issued rwts: total=2504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.610 filename0: (groupid=0, jobs=1): err= 0: pid=75008: Sat Dec 14 06:49:10 2024 00:18:56.610 read: IOPS=243, BW=972KiB/s (996kB/s)(9764KiB/10041msec) 00:18:56.610 slat (usec): min=6, max=8028, avg=20.16, stdev=229.35 00:18:56.610 clat (msec): min=3, max=135, avg=65.63, stdev=17.81 00:18:56.610 lat (msec): min=3, max=135, avg=65.65, stdev=17.81 00:18:56.610 clat percentiles (msec): 00:18:56.610 | 1.00th=[ 5], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 48], 00:18:56.610 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:18:56.610 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 96], 00:18:56.610 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 122], 00:18:56.611 | 99.99th=[ 136] 00:18:56.611 bw ( KiB/s): min= 832, max= 1397, per=4.14%, avg=972.25, stdev=115.48, samples=20 00:18:56.611 iops : min= 208, max= 349, avg=243.05, stdev=28.82, samples=20 00:18:56.611 lat (msec) : 4=0.66%, 10=0.66%, 20=1.23%, 50=19.01%, 100=75.26% 00:18:56.611 lat (msec) : 250=3.20% 00:18:56.611 cpu : usr=34.32%, sys=1.96%, ctx=996, majf=0, minf=0 00:18:56.611 IO depths : 1=0.2%, 2=0.6%, 4=1.8%, 8=80.9%, 16=16.6%, 32=0.0%, >=64=0.0% 00:18:56.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 complete : 0=0.0%, 4=88.2%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 issued rwts: total=2441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.611 filename0: (groupid=0, jobs=1): err= 0: pid=75009: Sat Dec 14 06:49:10 2024 00:18:56.611 read: IOPS=244, BW=980KiB/s (1003kB/s)(9804KiB/10007msec) 00:18:56.611 slat (usec): min=3, max=12027, avg=33.43, stdev=404.80 00:18:56.611 clat (msec): min=9, max=123, avg=65.17, stdev=17.12 00:18:56.611 lat (msec): min=9, max=123, avg=65.20, stdev=17.12 00:18:56.611 clat percentiles (msec): 00:18:56.611 | 1.00th=[ 28], 
5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 48], 00:18:56.611 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:18:56.611 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 96], 00:18:56.611 | 99.00th=[ 114], 99.50th=[ 124], 99.90th=[ 124], 99.95th=[ 124], 00:18:56.611 | 99.99th=[ 124] 00:18:56.611 bw ( KiB/s): min= 768, max= 1128, per=4.14%, avg=972.47, stdev=90.54, samples=19 00:18:56.611 iops : min= 192, max= 282, avg=243.05, stdev=22.59, samples=19 00:18:56.611 lat (msec) : 10=0.24%, 20=0.53%, 50=26.23%, 100=70.18%, 250=2.82% 00:18:56.611 cpu : usr=35.78%, sys=1.91%, ctx=1026, majf=0, minf=9 00:18:56.611 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:18:56.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 complete : 0=0.0%, 4=87.8%, 8=11.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 issued rwts: total=2451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.611 filename0: (groupid=0, jobs=1): err= 0: pid=75010: Sat Dec 14 06:49:10 2024 00:18:56.611 read: IOPS=240, BW=964KiB/s (987kB/s)(9676KiB/10041msec) 00:18:56.611 slat (usec): min=4, max=8030, avg=30.11, stdev=364.02 00:18:56.611 clat (msec): min=8, max=132, avg=66.23, stdev=16.78 00:18:56.611 lat (msec): min=8, max=132, avg=66.26, stdev=16.78 00:18:56.611 clat percentiles (msec): 00:18:56.611 | 1.00th=[ 12], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 49], 00:18:56.611 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:18:56.611 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 96], 00:18:56.611 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 112], 99.95th=[ 112], 00:18:56.611 | 99.99th=[ 132] 00:18:56.611 bw ( KiB/s): min= 872, max= 1266, per=4.09%, avg=961.80, stdev=84.54, samples=20 00:18:56.611 iops : min= 218, max= 316, avg=240.40, stdev=21.03, samples=20 00:18:56.611 lat (msec) : 10=0.66%, 20=0.66%, 50=20.34%, 100=76.19%, 250=2.15% 00:18:56.611 cpu : usr=31.54%, sys=1.74%, ctx=843, majf=0, minf=9 00:18:56.611 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.6%, 16=16.8%, 32=0.0%, >=64=0.0% 00:18:56.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 issued rwts: total=2419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.611 filename0: (groupid=0, jobs=1): err= 0: pid=75011: Sat Dec 14 06:49:10 2024 00:18:56.611 read: IOPS=241, BW=967KiB/s (990kB/s)(9696KiB/10025msec) 00:18:56.611 slat (usec): min=4, max=8025, avg=36.93, stdev=389.59 00:18:56.611 clat (msec): min=29, max=131, avg=65.96, stdev=16.19 00:18:56.611 lat (msec): min=29, max=131, avg=65.99, stdev=16.19 00:18:56.611 clat percentiles (msec): 00:18:56.611 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 48], 00:18:56.611 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 68], 60.00th=[ 72], 00:18:56.611 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 85], 95.00th=[ 96], 00:18:56.611 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 118], 99.95th=[ 127], 00:18:56.611 | 99.99th=[ 132] 00:18:56.611 bw ( KiB/s): min= 864, max= 1104, per=4.11%, avg=965.60, stdev=63.62, samples=20 00:18:56.611 iops : min= 216, max= 276, avg=241.35, stdev=15.92, samples=20 00:18:56.611 lat (msec) : 50=23.56%, 100=73.23%, 250=3.22% 00:18:56.611 cpu : usr=35.65%, sys=1.89%, ctx=1037, majf=0, minf=9 00:18:56.611 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.8%, 
16=16.5%, 32=0.0%, >=64=0.0% 00:18:56.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 issued rwts: total=2424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.611 filename0: (groupid=0, jobs=1): err= 0: pid=75012: Sat Dec 14 06:49:10 2024 00:18:56.611 read: IOPS=251, BW=1008KiB/s (1032kB/s)(9.85MiB/10004msec) 00:18:56.611 slat (usec): min=4, max=4025, avg=16.46, stdev=80.04 00:18:56.611 clat (msec): min=5, max=126, avg=63.40, stdev=17.11 00:18:56.611 lat (msec): min=5, max=126, avg=63.41, stdev=17.11 00:18:56.611 clat percentiles (msec): 00:18:56.611 | 1.00th=[ 11], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 48], 00:18:56.611 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 71], 00:18:56.611 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 95], 00:18:56.611 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 118], 99.95th=[ 127], 00:18:56.611 | 99.99th=[ 127] 00:18:56.611 bw ( KiB/s): min= 889, max= 1080, per=4.25%, avg=998.37, stdev=52.66, samples=19 00:18:56.611 iops : min= 222, max= 270, avg=249.53, stdev=13.19, samples=19 00:18:56.611 lat (msec) : 10=0.79%, 20=0.36%, 50=27.73%, 100=68.94%, 250=2.18% 00:18:56.611 cpu : usr=34.68%, sys=1.71%, ctx=981, majf=0, minf=9 00:18:56.611 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:18:56.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 issued rwts: total=2521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.611 filename1: (groupid=0, jobs=1): err= 0: pid=75013: Sat Dec 14 06:49:10 2024 00:18:56.611 read: IOPS=239, BW=960KiB/s (983kB/s)(9636KiB/10042msec) 00:18:56.611 slat (usec): min=4, max=4021, avg=18.45, stdev=127.62 00:18:56.611 clat (msec): min=6, max=126, avg=66.53, stdev=17.14 00:18:56.611 lat (msec): min=6, max=126, avg=66.54, stdev=17.14 00:18:56.611 clat percentiles (msec): 00:18:56.611 | 1.00th=[ 12], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 52], 00:18:56.611 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 72], 00:18:56.611 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 86], 95.00th=[ 97], 00:18:56.611 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 120], 99.95th=[ 121], 00:18:56.611 | 99.99th=[ 127] 00:18:56.611 bw ( KiB/s): min= 816, max= 1338, per=4.08%, avg=959.70, stdev=104.64, samples=20 00:18:56.611 iops : min= 204, max= 334, avg=239.90, stdev=26.06, samples=20 00:18:56.611 lat (msec) : 10=0.66%, 20=1.33%, 50=16.56%, 100=77.96%, 250=3.49% 00:18:56.611 cpu : usr=41.23%, sys=2.16%, ctx=1412, majf=0, minf=9 00:18:56.611 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.2%, 16=16.9%, 32=0.0%, >=64=0.0% 00:18:56.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 complete : 0=0.0%, 4=88.2%, 8=11.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 issued rwts: total=2409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.611 filename1: (groupid=0, jobs=1): err= 0: pid=75014: Sat Dec 14 06:49:10 2024 00:18:56.611 read: IOPS=246, BW=988KiB/s (1011kB/s)(9920KiB/10044msec) 00:18:56.611 slat (usec): min=4, max=5023, avg=17.55, stdev=142.31 00:18:56.611 clat (msec): min=7, max=113, avg=64.67, stdev=16.97 00:18:56.611 lat (msec): 
min=7, max=113, avg=64.68, stdev=16.97 00:18:56.611 clat percentiles (msec): 00:18:56.611 | 1.00th=[ 12], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 48], 00:18:56.611 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 66], 60.00th=[ 70], 00:18:56.611 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 96], 00:18:56.611 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 110], 99.95th=[ 110], 00:18:56.611 | 99.99th=[ 114] 00:18:56.611 bw ( KiB/s): min= 840, max= 1380, per=4.19%, avg=984.70, stdev=109.26, samples=20 00:18:56.611 iops : min= 210, max= 345, avg=246.15, stdev=27.32, samples=20 00:18:56.611 lat (msec) : 10=0.65%, 20=1.21%, 50=22.06%, 100=73.55%, 250=2.54% 00:18:56.611 cpu : usr=42.59%, sys=2.45%, ctx=1464, majf=0, minf=9 00:18:56.611 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=79.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:18:56.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 issued rwts: total=2480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.611 filename1: (groupid=0, jobs=1): err= 0: pid=75015: Sat Dec 14 06:49:10 2024 00:18:56.611 read: IOPS=248, BW=994KiB/s (1018kB/s)(9964KiB/10023msec) 00:18:56.611 slat (usec): min=6, max=8029, avg=29.50, stdev=306.12 00:18:56.611 clat (msec): min=24, max=115, avg=64.22, stdev=16.10 00:18:56.611 lat (msec): min=24, max=115, avg=64.25, stdev=16.10 00:18:56.611 clat percentiles (msec): 00:18:56.611 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 48], 00:18:56.611 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:18:56.611 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 96], 00:18:56.611 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 114], 99.95th=[ 115], 00:18:56.611 | 99.99th=[ 115] 00:18:56.611 bw ( KiB/s): min= 784, max= 1080, per=4.22%, avg=990.00, stdev=69.59, samples=20 00:18:56.611 iops : min= 196, max= 270, avg=247.50, stdev=17.40, samples=20 00:18:56.611 lat (msec) : 50=28.26%, 100=69.69%, 250=2.05% 00:18:56.611 cpu : usr=37.04%, sys=1.81%, ctx=1018, majf=0, minf=9 00:18:56.611 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:18:56.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.611 issued rwts: total=2491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.611 filename1: (groupid=0, jobs=1): err= 0: pid=75016: Sat Dec 14 06:49:10 2024 00:18:56.611 read: IOPS=242, BW=970KiB/s (993kB/s)(9720KiB/10022msec) 00:18:56.611 slat (usec): min=4, max=7608, avg=19.81, stdev=174.25 00:18:56.611 clat (msec): min=30, max=131, avg=65.84, stdev=16.84 00:18:56.611 lat (msec): min=30, max=131, avg=65.86, stdev=16.83 00:18:56.611 clat percentiles (msec): 00:18:56.611 | 1.00th=[ 37], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 48], 00:18:56.611 | 30.00th=[ 55], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 70], 00:18:56.612 | 70.00th=[ 73], 80.00th=[ 79], 90.00th=[ 86], 95.00th=[ 96], 00:18:56.612 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 131], 99.95th=[ 131], 00:18:56.612 | 99.99th=[ 132] 00:18:56.612 bw ( KiB/s): min= 752, max= 1096, per=4.12%, avg=968.00, stdev=76.47, samples=20 00:18:56.612 iops : min= 188, max= 274, avg=242.00, stdev=19.12, samples=20 00:18:56.612 lat (msec) : 50=23.95%, 100=72.26%, 250=3.79% 00:18:56.612 cpu : usr=42.14%, 
sys=2.18%, ctx=1396, majf=0, minf=9 00:18:56.612 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:18:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 complete : 0=0.0%, 4=87.8%, 8=11.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 issued rwts: total=2430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.612 filename1: (groupid=0, jobs=1): err= 0: pid=75017: Sat Dec 14 06:49:10 2024 00:18:56.612 read: IOPS=241, BW=966KiB/s (989kB/s)(9696KiB/10041msec) 00:18:56.612 slat (usec): min=4, max=8031, avg=21.43, stdev=202.28 00:18:56.612 clat (msec): min=17, max=131, avg=66.13, stdev=16.13 00:18:56.612 lat (msec): min=17, max=131, avg=66.15, stdev=16.13 00:18:56.612 clat percentiles (msec): 00:18:56.612 | 1.00th=[ 32], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 49], 00:18:56.612 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 72], 00:18:56.612 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 96], 00:18:56.612 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 124], 00:18:56.612 | 99.99th=[ 132] 00:18:56.612 bw ( KiB/s): min= 808, max= 1269, per=4.10%, avg=962.70, stdev=97.00, samples=20 00:18:56.612 iops : min= 202, max= 317, avg=240.65, stdev=24.20, samples=20 00:18:56.612 lat (msec) : 20=0.66%, 50=20.71%, 100=75.99%, 250=2.64% 00:18:56.612 cpu : usr=34.18%, sys=1.77%, ctx=1061, majf=0, minf=9 00:18:56.612 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.2%, 16=16.9%, 32=0.0%, >=64=0.0% 00:18:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 issued rwts: total=2424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.612 filename1: (groupid=0, jobs=1): err= 0: pid=75018: Sat Dec 14 06:49:10 2024 00:18:56.612 read: IOPS=242, BW=970KiB/s (993kB/s)(9740KiB/10040msec) 00:18:56.612 slat (usec): min=4, max=8025, avg=24.67, stdev=281.07 00:18:56.612 clat (msec): min=7, max=132, avg=65.82, stdev=17.12 00:18:56.612 lat (msec): min=7, max=132, avg=65.84, stdev=17.13 00:18:56.612 clat percentiles (msec): 00:18:56.612 | 1.00th=[ 12], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 48], 00:18:56.612 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:18:56.612 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 85], 95.00th=[ 96], 00:18:56.612 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 121], 00:18:56.612 | 99.99th=[ 133] 00:18:56.612 bw ( KiB/s): min= 840, max= 1216, per=4.12%, avg=967.30, stdev=88.39, samples=20 00:18:56.612 iops : min= 210, max= 304, avg=241.80, stdev=22.10, samples=20 00:18:56.612 lat (msec) : 10=0.57%, 20=0.74%, 50=21.27%, 100=74.58%, 250=2.83% 00:18:56.612 cpu : usr=34.60%, sys=1.95%, ctx=997, majf=0, minf=9 00:18:56.612 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=81.6%, 16=16.7%, 32=0.0%, >=64=0.0% 00:18:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 issued rwts: total=2435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.612 filename1: (groupid=0, jobs=1): err= 0: pid=75019: Sat Dec 14 06:49:10 2024 00:18:56.612 read: IOPS=245, BW=982KiB/s (1006kB/s)(9852KiB/10031msec) 00:18:56.612 slat (usec): min=4, max=8024, avg=21.05, stdev=228.24 
00:18:56.612 clat (msec): min=24, max=131, avg=65.03, stdev=16.31 00:18:56.612 lat (msec): min=24, max=131, avg=65.05, stdev=16.32 00:18:56.612 clat percentiles (msec): 00:18:56.612 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 48], 00:18:56.612 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 72], 00:18:56.612 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 96], 00:18:56.612 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 115], 99.95th=[ 116], 00:18:56.612 | 99.99th=[ 132] 00:18:56.612 bw ( KiB/s): min= 840, max= 1186, per=4.17%, avg=980.65, stdev=77.67, samples=20 00:18:56.612 iops : min= 210, max= 296, avg=245.10, stdev=19.36, samples=20 00:18:56.612 lat (msec) : 50=25.25%, 100=71.78%, 250=2.96% 00:18:56.612 cpu : usr=33.43%, sys=1.87%, ctx=964, majf=0, minf=9 00:18:56.612 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=82.8%, 16=16.6%, 32=0.0%, >=64=0.0% 00:18:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 issued rwts: total=2463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.612 filename1: (groupid=0, jobs=1): err= 0: pid=75020: Sat Dec 14 06:49:10 2024 00:18:56.612 read: IOPS=247, BW=988KiB/s (1012kB/s)(9912KiB/10031msec) 00:18:56.612 slat (usec): min=4, max=8061, avg=21.03, stdev=191.90 00:18:56.612 clat (msec): min=29, max=128, avg=64.62, stdev=16.26 00:18:56.612 lat (msec): min=29, max=128, avg=64.64, stdev=16.26 00:18:56.612 clat percentiles (msec): 00:18:56.612 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 48], 00:18:56.612 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 70], 00:18:56.612 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 96], 00:18:56.612 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 108], 00:18:56.612 | 99.99th=[ 129] 00:18:56.612 bw ( KiB/s): min= 840, max= 1096, per=4.20%, avg=986.75, stdev=66.13, samples=20 00:18:56.612 iops : min= 210, max= 274, avg=246.60, stdev=16.51, samples=20 00:18:56.612 lat (msec) : 50=25.83%, 100=70.86%, 250=3.31% 00:18:56.612 cpu : usr=38.35%, sys=2.28%, ctx=1396, majf=0, minf=9 00:18:56.612 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.0%, 16=16.4%, 32=0.0%, >=64=0.0% 00:18:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 issued rwts: total=2478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.612 filename2: (groupid=0, jobs=1): err= 0: pid=75021: Sat Dec 14 06:49:10 2024 00:18:56.612 read: IOPS=241, BW=967KiB/s (990kB/s)(9692KiB/10024msec) 00:18:56.612 slat (usec): min=4, max=8032, avg=26.77, stdev=269.94 00:18:56.612 clat (msec): min=26, max=132, avg=65.98, stdev=16.06 00:18:56.612 lat (msec): min=26, max=132, avg=66.01, stdev=16.06 00:18:56.612 clat percentiles (msec): 00:18:56.612 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 48], 00:18:56.612 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 66], 60.00th=[ 71], 00:18:56.612 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 87], 95.00th=[ 96], 00:18:56.612 | 99.00th=[ 108], 99.50th=[ 113], 99.90th=[ 124], 99.95th=[ 129], 00:18:56.612 | 99.99th=[ 132] 00:18:56.612 bw ( KiB/s): min= 768, max= 1072, per=4.10%, avg=962.85, stdev=68.88, samples=20 00:18:56.612 iops : min= 192, max= 268, avg=240.70, stdev=17.23, samples=20 00:18:56.612 lat (msec) : 
50=23.52%, 100=73.79%, 250=2.68% 00:18:56.612 cpu : usr=41.21%, sys=2.39%, ctx=1222, majf=0, minf=9 00:18:56.612 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.6%, 16=16.1%, 32=0.0%, >=64=0.0% 00:18:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 issued rwts: total=2423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.612 filename2: (groupid=0, jobs=1): err= 0: pid=75022: Sat Dec 14 06:49:10 2024 00:18:56.612 read: IOPS=236, BW=948KiB/s (971kB/s)(9508KiB/10032msec) 00:18:56.612 slat (usec): min=5, max=8022, avg=19.85, stdev=183.76 00:18:56.612 clat (msec): min=29, max=131, avg=67.39, stdev=15.46 00:18:56.612 lat (msec): min=29, max=131, avg=67.41, stdev=15.46 00:18:56.612 clat percentiles (msec): 00:18:56.612 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:18:56.612 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 72], 00:18:56.612 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 87], 95.00th=[ 96], 00:18:56.612 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 112], 99.95th=[ 121], 00:18:56.612 | 99.99th=[ 132] 00:18:56.612 bw ( KiB/s): min= 848, max= 1111, per=4.03%, avg=945.70, stdev=67.73, samples=20 00:18:56.612 iops : min= 212, max= 277, avg=236.35, stdev=16.83, samples=20 00:18:56.612 lat (msec) : 50=17.80%, 100=79.18%, 250=3.03% 00:18:56.612 cpu : usr=40.61%, sys=2.12%, ctx=1284, majf=0, minf=9 00:18:56.612 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.2%, 16=17.1%, 32=0.0%, >=64=0.0% 00:18:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 complete : 0=0.0%, 4=87.9%, 8=12.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 issued rwts: total=2377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.612 filename2: (groupid=0, jobs=1): err= 0: pid=75023: Sat Dec 14 06:49:10 2024 00:18:56.612 read: IOPS=259, BW=1038KiB/s (1063kB/s)(10.1MiB/10001msec) 00:18:56.612 slat (usec): min=3, max=8025, avg=26.17, stdev=289.31 00:18:56.612 clat (usec): min=735, max=125444, avg=61559.92, stdev=19582.37 00:18:56.612 lat (usec): min=742, max=125455, avg=61586.08, stdev=19584.34 00:18:56.612 clat percentiles (usec): 00:18:56.612 | 1.00th=[ 1549], 5.00th=[ 35914], 10.00th=[ 44827], 20.00th=[ 47973], 00:18:56.612 | 30.00th=[ 47973], 40.00th=[ 59507], 50.00th=[ 61080], 60.00th=[ 70779], 00:18:56.612 | 70.00th=[ 71828], 80.00th=[ 72877], 90.00th=[ 83362], 95.00th=[ 95945], 00:18:56.612 | 99.00th=[108528], 99.50th=[109577], 99.90th=[119014], 99.95th=[125305], 00:18:56.612 | 99.99th=[125305] 00:18:56.612 bw ( KiB/s): min= 768, max= 1104, per=4.26%, avg=1000.84, stdev=73.80, samples=19 00:18:56.612 iops : min= 192, max= 276, avg=250.21, stdev=18.45, samples=19 00:18:56.612 lat (usec) : 750=0.08%, 1000=0.04% 00:18:56.612 lat (msec) : 2=1.50%, 4=0.50%, 10=1.12%, 20=0.35%, 50=32.01% 00:18:56.612 lat (msec) : 100=61.94%, 250=2.47% 00:18:56.612 cpu : usr=32.34%, sys=2.02%, ctx=902, majf=0, minf=9 00:18:56.612 IO depths : 1=0.1%, 2=0.7%, 4=2.4%, 8=81.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:18:56.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.612 issued rwts: total=2596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.612 
filename2: (groupid=0, jobs=1): err= 0: pid=75024: Sat Dec 14 06:49:10 2024 00:18:56.612 read: IOPS=255, BW=1020KiB/s (1045kB/s)(9.97MiB/10006msec) 00:18:56.612 slat (usec): min=4, max=4032, avg=17.59, stdev=79.68 00:18:56.612 clat (msec): min=6, max=115, avg=62.65, stdev=17.25 00:18:56.612 lat (msec): min=6, max=115, avg=62.67, stdev=17.25 00:18:56.612 clat percentiles (msec): 00:18:56.613 | 1.00th=[ 15], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 47], 00:18:56.613 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 64], 60.00th=[ 69], 00:18:56.613 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 84], 95.00th=[ 95], 00:18:56.613 | 99.00th=[ 107], 99.50th=[ 113], 99.90th=[ 115], 99.95th=[ 115], 00:18:56.613 | 99.99th=[ 115] 00:18:56.613 bw ( KiB/s): min= 768, max= 1123, per=4.31%, avg=1011.00, stdev=76.25, samples=19 00:18:56.613 iops : min= 192, max= 280, avg=252.68, stdev=19.01, samples=19 00:18:56.613 lat (msec) : 10=0.59%, 20=0.43%, 50=29.70%, 100=67.08%, 250=2.19% 00:18:56.613 cpu : usr=45.00%, sys=2.35%, ctx=1636, majf=0, minf=9 00:18:56.613 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:18:56.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.613 complete : 0=0.0%, 4=87.2%, 8=12.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.613 issued rwts: total=2552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.613 filename2: (groupid=0, jobs=1): err= 0: pid=75025: Sat Dec 14 06:49:10 2024 00:18:56.613 read: IOPS=247, BW=989KiB/s (1012kB/s)(9896KiB/10009msec) 00:18:56.613 slat (usec): min=4, max=8025, avg=26.47, stdev=267.00 00:18:56.613 clat (msec): min=27, max=116, avg=64.59, stdev=16.06 00:18:56.613 lat (msec): min=27, max=116, avg=64.62, stdev=16.06 00:18:56.613 clat percentiles (msec): 00:18:56.613 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 48], 00:18:56.613 | 30.00th=[ 53], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 70], 00:18:56.613 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 95], 00:18:56.613 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 117], 99.95th=[ 117], 00:18:56.613 | 99.99th=[ 117] 00:18:56.613 bw ( KiB/s): min= 768, max= 1104, per=4.20%, avg=986.21, stdev=71.51, samples=19 00:18:56.613 iops : min= 192, max= 276, avg=246.53, stdev=17.89, samples=19 00:18:56.613 lat (msec) : 50=27.16%, 100=70.37%, 250=2.47% 00:18:56.613 cpu : usr=38.74%, sys=1.96%, ctx=1235, majf=0, minf=9 00:18:56.613 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:18:56.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.613 complete : 0=0.0%, 4=87.6%, 8=11.8%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.613 issued rwts: total=2474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.613 filename2: (groupid=0, jobs=1): err= 0: pid=75026: Sat Dec 14 06:49:10 2024 00:18:56.613 read: IOPS=250, BW=1000KiB/s (1024kB/s)(9.77MiB/10005msec) 00:18:56.613 slat (usec): min=4, max=8026, avg=23.12, stdev=240.28 00:18:56.613 clat (msec): min=5, max=132, avg=63.88, stdev=17.20 00:18:56.613 lat (msec): min=5, max=132, avg=63.91, stdev=17.20 00:18:56.613 clat percentiles (msec): 00:18:56.613 | 1.00th=[ 12], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 48], 00:18:56.613 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 71], 00:18:56.613 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 96], 00:18:56.613 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 
133], 00:18:56.613 | 99.99th=[ 133] 00:18:56.613 bw ( KiB/s): min= 784, max= 1128, per=4.21%, avg=989.05, stdev=84.64, samples=19 00:18:56.613 iops : min= 196, max= 282, avg=247.21, stdev=21.14, samples=19 00:18:56.613 lat (msec) : 10=0.84%, 20=0.56%, 50=28.74%, 100=67.23%, 250=2.64% 00:18:56.613 cpu : usr=37.16%, sys=1.80%, ctx=1051, majf=0, minf=9 00:18:56.613 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=80.1%, 16=15.4%, 32=0.0%, >=64=0.0% 00:18:56.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.613 complete : 0=0.0%, 4=87.8%, 8=11.4%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.613 issued rwts: total=2502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.613 filename2: (groupid=0, jobs=1): err= 0: pid=75027: Sat Dec 14 06:49:10 2024 00:18:56.613 read: IOPS=241, BW=966KiB/s (989kB/s)(9684KiB/10026msec) 00:18:56.613 slat (usec): min=3, max=8025, avg=30.65, stdev=333.62 00:18:56.613 clat (msec): min=33, max=131, avg=66.08, stdev=15.98 00:18:56.613 lat (msec): min=33, max=131, avg=66.11, stdev=15.98 00:18:56.613 clat percentiles (msec): 00:18:56.613 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 48], 00:18:56.613 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 72], 00:18:56.613 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 85], 95.00th=[ 96], 00:18:56.613 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 117], 99.95th=[ 117], 00:18:56.613 | 99.99th=[ 132] 00:18:56.613 bw ( KiB/s): min= 864, max= 1048, per=4.11%, avg=964.35, stdev=54.05, samples=20 00:18:56.613 iops : min= 216, max= 262, avg=241.05, stdev=13.48, samples=20 00:18:56.613 lat (msec) : 50=25.24%, 100=72.57%, 250=2.19% 00:18:56.613 cpu : usr=33.24%, sys=1.82%, ctx=904, majf=0, minf=9 00:18:56.613 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:18:56.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.613 complete : 0=0.0%, 4=88.0%, 8=11.4%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.613 issued rwts: total=2421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.613 filename2: (groupid=0, jobs=1): err= 0: pid=75028: Sat Dec 14 06:49:10 2024 00:18:56.613 read: IOPS=254, BW=1017KiB/s (1042kB/s)(9.94MiB/10004msec) 00:18:56.613 slat (usec): min=3, max=8031, avg=24.93, stdev=275.07 00:18:56.613 clat (msec): min=3, max=125, avg=62.80, stdev=17.68 00:18:56.613 lat (msec): min=3, max=125, avg=62.82, stdev=17.68 00:18:56.613 clat percentiles (msec): 00:18:56.613 | 1.00th=[ 7], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 48], 00:18:56.613 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 71], 00:18:56.613 | 70.00th=[ 72], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 95], 00:18:56.613 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 126], 00:18:56.613 | 99.99th=[ 126] 00:18:56.613 bw ( KiB/s): min= 848, max= 1152, per=4.26%, avg=1001.26, stdev=79.39, samples=19 00:18:56.613 iops : min= 212, max= 288, avg=250.32, stdev=19.85, samples=19 00:18:56.613 lat (msec) : 4=0.12%, 10=1.10%, 20=0.63%, 50=29.72%, 100=66.04% 00:18:56.613 lat (msec) : 250=2.40% 00:18:56.613 cpu : usr=33.76%, sys=1.82%, ctx=1021, majf=0, minf=9 00:18:56.613 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:18:56.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.613 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.613 issued rwts: 
total=2544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:18:56.613 00:18:56.613 Run status group 0 (all jobs): 00:18:56.613 READ: bw=22.9MiB/s (24.0MB/s), 914KiB/s-1038KiB/s (936kB/s-1063kB/s), io=230MiB (241MB), run=10001-10044msec 00:18:56.872 06:49:10 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:18:56.872 06:49:10 -- target/dif.sh@43 -- # local sub 00:18:56.872 06:49:10 -- target/dif.sh@45 -- # for sub in "$@" 00:18:56.872 06:49:10 -- target/dif.sh@46 -- # destroy_subsystem 0 00:18:56.872 06:49:10 -- target/dif.sh@36 -- # local sub_id=0 00:18:56.872 06:49:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:56.872 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.872 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.872 06:49:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:18:56.872 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.872 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.872 06:49:10 -- target/dif.sh@45 -- # for sub in "$@" 00:18:56.872 06:49:10 -- target/dif.sh@46 -- # destroy_subsystem 1 00:18:56.872 06:49:10 -- target/dif.sh@36 -- # local sub_id=1 00:18:56.872 06:49:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.872 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.872 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.872 06:49:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:18:56.872 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.872 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.872 06:49:10 -- target/dif.sh@45 -- # for sub in "$@" 00:18:56.872 06:49:10 -- target/dif.sh@46 -- # destroy_subsystem 2 00:18:56.872 06:49:10 -- target/dif.sh@36 -- # local sub_id=2 00:18:56.872 06:49:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:56.872 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.872 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.872 06:49:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:18:56.872 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.872 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.872 06:49:10 -- target/dif.sh@115 -- # NULL_DIF=1 00:18:56.872 06:49:10 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:18:56.872 06:49:10 -- target/dif.sh@115 -- # numjobs=2 00:18:56.872 06:49:10 -- target/dif.sh@115 -- # iodepth=8 00:18:56.872 06:49:10 -- target/dif.sh@115 -- # runtime=5 00:18:56.872 06:49:10 -- target/dif.sh@115 -- # files=1 00:18:56.872 06:49:10 -- target/dif.sh@117 -- # create_subsystems 0 1 00:18:56.872 06:49:10 -- target/dif.sh@28 -- # local sub 00:18:56.872 06:49:10 -- target/dif.sh@30 -- # for sub in "$@" 00:18:56.872 06:49:10 -- target/dif.sh@31 -- # create_subsystem 0 00:18:56.872 06:49:10 -- target/dif.sh@18 -- # local sub_id=0 00:18:56.872 06:49:10 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:18:56.872 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.872 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 bdev_null0 00:18:56.872 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.872 06:49:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:18:56.872 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.872 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.872 06:49:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:18:56.872 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.872 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.872 06:49:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:56.872 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.872 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 [2024-12-14 06:49:10.843525] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.872 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.872 06:49:10 -- target/dif.sh@30 -- # for sub in "$@" 00:18:56.872 06:49:10 -- target/dif.sh@31 -- # create_subsystem 1 00:18:56.872 06:49:10 -- target/dif.sh@18 -- # local sub_id=1 00:18:56.872 06:49:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:18:56.872 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.872 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:56.872 bdev_null1 00:18:56.872 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.872 06:49:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:18:56.872 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.872 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:57.131 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.131 06:49:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:18:57.131 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.131 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:57.131 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.131 06:49:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:57.131 06:49:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.131 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:18:57.131 06:49:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.131 06:49:10 -- target/dif.sh@118 -- # fio /dev/fd/62 00:18:57.131 06:49:10 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:18:57.131 06:49:10 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:18:57.131 06:49:10 -- nvmf/common.sh@520 -- # config=() 00:18:57.131 06:49:10 -- nvmf/common.sh@520 -- # local subsystem config 00:18:57.131 06:49:10 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
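The create_subsystem helper traced above is a thin wrapper around plain SPDK JSON-RPC calls. A minimal standalone sketch with scripts/rpc.py (assuming the default /var/tmp/spdk.sock RPC socket and the target-side address used throughout this run):

    # null bdev with 512-byte blocks, 16-byte metadata and DIF type 1 protection info
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # export it through an NVMe-oF subsystem listening on NVMe/TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Teardown is the reverse, as the destroy_subsystems trace above shows: nvmf_delete_subsystem for each cnode, then bdev_null_delete for each null bdev.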
00:18:57.131 06:49:10 -- target/dif.sh@82 -- # gen_fio_conf 00:18:57.131 06:49:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:57.131 06:49:10 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:57.131 06:49:10 -- target/dif.sh@54 -- # local file 00:18:57.131 06:49:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:57.131 { 00:18:57.131 "params": { 00:18:57.131 "name": "Nvme$subsystem", 00:18:57.131 "trtype": "$TEST_TRANSPORT", 00:18:57.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:57.131 "adrfam": "ipv4", 00:18:57.131 "trsvcid": "$NVMF_PORT", 00:18:57.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:57.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:57.131 "hdgst": ${hdgst:-false}, 00:18:57.132 "ddgst": ${ddgst:-false} 00:18:57.132 }, 00:18:57.132 "method": "bdev_nvme_attach_controller" 00:18:57.132 } 00:18:57.132 EOF 00:18:57.132 )") 00:18:57.132 06:49:10 -- target/dif.sh@56 -- # cat 00:18:57.132 06:49:10 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:18:57.132 06:49:10 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:57.132 06:49:10 -- common/autotest_common.sh@1328 -- # local sanitizers 00:18:57.132 06:49:10 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:57.132 06:49:10 -- common/autotest_common.sh@1330 -- # shift 00:18:57.132 06:49:10 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:18:57.132 06:49:10 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:57.132 06:49:10 -- nvmf/common.sh@542 -- # cat 00:18:57.132 06:49:10 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:57.132 06:49:10 -- common/autotest_common.sh@1334 -- # grep libasan 00:18:57.132 06:49:10 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:57.132 06:49:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:57.132 06:49:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:57.132 { 00:18:57.132 "params": { 00:18:57.132 "name": "Nvme$subsystem", 00:18:57.132 "trtype": "$TEST_TRANSPORT", 00:18:57.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:57.132 "adrfam": "ipv4", 00:18:57.132 "trsvcid": "$NVMF_PORT", 00:18:57.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:57.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:57.132 "hdgst": ${hdgst:-false}, 00:18:57.132 "ddgst": ${ddgst:-false} 00:18:57.132 }, 00:18:57.132 "method": "bdev_nvme_attach_controller" 00:18:57.132 } 00:18:57.132 EOF 00:18:57.132 )") 00:18:57.132 06:49:10 -- target/dif.sh@72 -- # (( file = 1 )) 00:18:57.132 06:49:10 -- target/dif.sh@72 -- # (( file <= files )) 00:18:57.132 06:49:10 -- target/dif.sh@73 -- # cat 00:18:57.132 06:49:10 -- nvmf/common.sh@542 -- # cat 00:18:57.132 06:49:10 -- target/dif.sh@72 -- # (( file++ )) 00:18:57.132 06:49:10 -- target/dif.sh@72 -- # (( file <= files )) 00:18:57.132 06:49:10 -- nvmf/common.sh@544 -- # jq . 
00:18:57.132 06:49:10 -- nvmf/common.sh@545 -- # IFS=, 00:18:57.132 06:49:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:57.132 "params": { 00:18:57.132 "name": "Nvme0", 00:18:57.132 "trtype": "tcp", 00:18:57.132 "traddr": "10.0.0.2", 00:18:57.132 "adrfam": "ipv4", 00:18:57.132 "trsvcid": "4420", 00:18:57.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:57.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:57.132 "hdgst": false, 00:18:57.132 "ddgst": false 00:18:57.132 }, 00:18:57.132 "method": "bdev_nvme_attach_controller" 00:18:57.132 },{ 00:18:57.132 "params": { 00:18:57.132 "name": "Nvme1", 00:18:57.132 "trtype": "tcp", 00:18:57.132 "traddr": "10.0.0.2", 00:18:57.132 "adrfam": "ipv4", 00:18:57.132 "trsvcid": "4420", 00:18:57.132 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:57.132 "hdgst": false, 00:18:57.132 "ddgst": false 00:18:57.132 }, 00:18:57.132 "method": "bdev_nvme_attach_controller" 00:18:57.132 }' 00:18:57.132 06:49:10 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:57.132 06:49:10 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:57.132 06:49:10 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:18:57.132 06:49:10 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:57.132 06:49:10 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:18:57.132 06:49:10 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:18:57.132 06:49:10 -- common/autotest_common.sh@1334 -- # asan_lib= 00:18:57.132 06:49:10 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:18:57.132 06:49:10 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:57.132 06:49:10 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:18:57.132 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:57.132 ... 00:18:57.132 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:18:57.132 ... 00:18:57.132 fio-3.35 00:18:57.132 Starting 4 threads 00:18:57.736 [2024-12-14 06:49:11.469086] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
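None of the reads above touch a kernel block device: fio is LD_PRELOAD-ed with the spdk_bdev plugin and handed the JSON printed just above (two bdev_nvme_attach_controller calls, Nvme0 and Nvme1, over NVMe/TCP) on /dev/fd/62, while the job file arrives on /dev/fd/61. A rough standalone equivalent, assuming the generated JSON is saved to bdev.json and the attached namespaces surface as bdevs Nvme0n1 and Nvme1n1:

    # job file for the bdev fio plugin; thread=1 is required by the plugin
    cat > bdev.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=bdev.json
    thread=1
    rw=randread
    bs=8k
    iodepth=8
    [job0]
    filename=Nvme0n1
    [job1]
    filename=Nvme1n1
    EOF
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio bdev.fio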
00:18:57.736 [2024-12-14 06:49:11.469177] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:03.020 00:19:03.020 filename0: (groupid=0, jobs=1): err= 0: pid=75177: Sat Dec 14 06:49:16 2024 00:19:03.020 read: IOPS=2242, BW=17.5MiB/s (18.4MB/s)(87.6MiB/5001msec) 00:19:03.020 slat (nsec): min=7009, max=70975, avg=15128.53, stdev=4687.01 00:19:03.020 clat (usec): min=948, max=6965, avg=3528.73, stdev=1024.10 00:19:03.020 lat (usec): min=956, max=6980, avg=3543.86, stdev=1023.40 00:19:03.020 clat percentiles (usec): 00:19:03.020 | 1.00th=[ 1991], 5.00th=[ 2024], 10.00th=[ 2212], 20.00th=[ 2540], 00:19:03.020 | 30.00th=[ 2638], 40.00th=[ 2933], 50.00th=[ 3621], 60.00th=[ 4228], 00:19:03.020 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4752], 95.00th=[ 4817], 00:19:03.020 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 5014], 99.95th=[ 5080], 00:19:03.020 | 99.99th=[ 5145] 00:19:03.020 bw ( KiB/s): min=16416, max=18416, per=26.58%, avg=17928.33, stdev=591.22, samples=9 00:19:03.020 iops : min= 2052, max= 2302, avg=2241.00, stdev=73.89, samples=9 00:19:03.020 lat (usec) : 1000=0.03% 00:19:03.020 lat (msec) : 2=1.34%, 4=56.33%, 10=42.31% 00:19:03.020 cpu : usr=91.28%, sys=7.62%, ctx=6, majf=0, minf=9 00:19:03.020 IO depths : 1=0.1%, 2=2.6%, 4=62.3%, 8=35.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.020 complete : 0=0.0%, 4=99.0%, 8=1.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.020 issued rwts: total=11216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.020 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:03.020 filename0: (groupid=0, jobs=1): err= 0: pid=75178: Sat Dec 14 06:49:16 2024 00:19:03.020 read: IOPS=2201, BW=17.2MiB/s (18.0MB/s)(86.0MiB/5002msec) 00:19:03.020 slat (nsec): min=7085, max=71886, avg=14406.49, stdev=5225.02 00:19:03.020 clat (usec): min=1530, max=7058, avg=3596.96, stdev=1038.39 00:19:03.020 lat (usec): min=1539, max=7084, avg=3611.37, stdev=1037.81 00:19:03.020 clat percentiles (usec): 00:19:03.020 | 1.00th=[ 1991], 5.00th=[ 2040], 10.00th=[ 2245], 20.00th=[ 2540], 00:19:03.020 | 30.00th=[ 2671], 40.00th=[ 2933], 50.00th=[ 3752], 60.00th=[ 4293], 00:19:03.020 | 70.00th=[ 4621], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 4883], 00:19:03.020 | 99.00th=[ 5014], 99.50th=[ 5080], 99.90th=[ 5407], 99.95th=[ 6849], 00:19:03.020 | 99.99th=[ 6915] 00:19:03.020 bw ( KiB/s): min=14768, max=18368, per=26.05%, avg=17568.00, stdev=1229.71, samples=9 00:19:03.020 iops : min= 1846, max= 2296, avg=2196.00, stdev=153.71, samples=9 00:19:03.020 lat (msec) : 2=1.20%, 4=53.63%, 10=45.17% 00:19:03.020 cpu : usr=90.48%, sys=8.40%, ctx=5, majf=0, minf=0 00:19:03.020 IO depths : 1=0.1%, 2=3.8%, 4=61.6%, 8=34.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.020 complete : 0=0.0%, 4=98.6%, 8=1.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.020 issued rwts: total=11010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.020 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:03.020 filename1: (groupid=0, jobs=1): err= 0: pid=75179: Sat Dec 14 06:49:16 2024 00:19:03.020 read: IOPS=2076, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5003msec) 00:19:03.020 slat (usec): min=3, max=3951, avg=12.05, stdev=38.95 00:19:03.020 clat (usec): min=682, max=13909, avg=3815.79, stdev=1073.19 00:19:03.020 lat (usec): min=690, max=13928, avg=3827.84, stdev=1073.72 00:19:03.020 clat percentiles (usec): 
00:19:03.020 | 1.00th=[ 1336], 5.00th=[ 2008], 10.00th=[ 2073], 20.00th=[ 2704], 00:19:03.020 | 30.00th=[ 2966], 40.00th=[ 3687], 50.00th=[ 4293], 60.00th=[ 4621], 00:19:03.020 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4817], 95.00th=[ 4883], 00:19:03.020 | 99.00th=[ 5014], 99.50th=[ 5145], 99.90th=[ 6456], 99.95th=[ 9765], 00:19:03.020 | 99.99th=[ 9765] 00:19:03.020 bw ( KiB/s): min=13312, max=18183, per=24.41%, avg=16466.56, stdev=2102.45, samples=9 00:19:03.020 iops : min= 1664, max= 2272, avg=2058.22, stdev=262.72, samples=9 00:19:03.020 lat (usec) : 750=0.02%, 1000=0.07% 00:19:03.020 lat (msec) : 2=4.22%, 4=42.11%, 10=53.58%, 20=0.01% 00:19:03.021 cpu : usr=90.40%, sys=8.26%, ctx=125, majf=0, minf=9 00:19:03.021 IO depths : 1=0.1%, 2=7.9%, 4=59.3%, 8=32.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.021 complete : 0=0.0%, 4=97.0%, 8=3.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.021 issued rwts: total=10387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.021 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:03.021 filename1: (groupid=0, jobs=1): err= 0: pid=75180: Sat Dec 14 06:49:16 2024 00:19:03.021 read: IOPS=1912, BW=14.9MiB/s (15.7MB/s)(74.7MiB/5002msec) 00:19:03.021 slat (usec): min=7, max=191, avg=13.68, stdev= 6.37 00:19:03.021 clat (usec): min=1773, max=6961, avg=4139.03, stdev=967.17 00:19:03.021 lat (usec): min=1787, max=6976, avg=4152.71, stdev=965.05 00:19:03.021 clat percentiles (usec): 00:19:03.021 | 1.00th=[ 2089], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2671], 00:19:03.021 | 30.00th=[ 3851], 40.00th=[ 4490], 50.00th=[ 4686], 60.00th=[ 4817], 00:19:03.021 | 70.00th=[ 4817], 80.00th=[ 4817], 90.00th=[ 4883], 95.00th=[ 4948], 00:19:03.021 | 99.00th=[ 5276], 99.50th=[ 5932], 99.90th=[ 6194], 99.95th=[ 6259], 00:19:03.021 | 99.99th=[ 6980] 00:19:03.021 bw ( KiB/s): min=13056, max=18416, per=23.03%, avg=15534.78, stdev=2419.28, samples=9 00:19:03.021 iops : min= 1632, max= 2302, avg=1941.78, stdev=302.49, samples=9 00:19:03.021 lat (msec) : 2=0.29%, 4=32.63%, 10=67.07% 00:19:03.021 cpu : usr=89.06%, sys=9.24%, ctx=108, majf=0, minf=9 00:19:03.021 IO depths : 1=0.1%, 2=14.4%, 4=55.8%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:03.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.021 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:03.021 issued rwts: total=9564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:03.021 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:03.021 00:19:03.021 Run status group 0 (all jobs): 00:19:03.021 READ: bw=65.9MiB/s (69.1MB/s), 14.9MiB/s-17.5MiB/s (15.7MB/s-18.4MB/s), io=330MiB (346MB), run=5001-5003msec 00:19:03.021 06:49:16 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:03.021 06:49:16 -- target/dif.sh@43 -- # local sub 00:19:03.021 06:49:16 -- target/dif.sh@45 -- # for sub in "$@" 00:19:03.021 06:49:16 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:03.021 06:49:16 -- target/dif.sh@36 -- # local sub_id=0 00:19:03.021 06:49:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:03.021 06:49:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.021 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:19:03.021 06:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.021 06:49:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:03.021 06:49:16 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:03.021 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:19:03.021 06:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.021 06:49:16 -- target/dif.sh@45 -- # for sub in "$@" 00:19:03.021 06:49:16 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:03.021 06:49:16 -- target/dif.sh@36 -- # local sub_id=1 00:19:03.021 06:49:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:03.021 06:49:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.021 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:19:03.021 06:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.021 06:49:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:03.021 06:49:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.021 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:19:03.021 06:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.021 00:19:03.021 real 0m23.115s 00:19:03.021 user 2m2.873s 00:19:03.021 sys 0m8.328s 00:19:03.021 06:49:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:03.021 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:19:03.021 ************************************ 00:19:03.021 END TEST fio_dif_rand_params 00:19:03.021 ************************************ 00:19:03.021 06:49:16 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:03.021 06:49:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:03.021 06:49:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:03.021 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:19:03.021 ************************************ 00:19:03.021 START TEST fio_dif_digest 00:19:03.021 ************************************ 00:19:03.021 06:49:16 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:19:03.021 06:49:16 -- target/dif.sh@123 -- # local NULL_DIF 00:19:03.021 06:49:16 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:03.021 06:49:16 -- target/dif.sh@125 -- # local hdgst ddgst 00:19:03.021 06:49:16 -- target/dif.sh@127 -- # NULL_DIF=3 00:19:03.021 06:49:16 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:03.021 06:49:16 -- target/dif.sh@127 -- # numjobs=3 00:19:03.021 06:49:16 -- target/dif.sh@127 -- # iodepth=3 00:19:03.021 06:49:16 -- target/dif.sh@127 -- # runtime=10 00:19:03.021 06:49:16 -- target/dif.sh@128 -- # hdgst=true 00:19:03.021 06:49:16 -- target/dif.sh@128 -- # ddgst=true 00:19:03.021 06:49:16 -- target/dif.sh@130 -- # create_subsystems 0 00:19:03.021 06:49:16 -- target/dif.sh@28 -- # local sub 00:19:03.021 06:49:16 -- target/dif.sh@30 -- # for sub in "$@" 00:19:03.021 06:49:16 -- target/dif.sh@31 -- # create_subsystem 0 00:19:03.021 06:49:16 -- target/dif.sh@18 -- # local sub_id=0 00:19:03.021 06:49:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:03.021 06:49:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.021 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:19:03.021 bdev_null0 00:19:03.021 06:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.021 06:49:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:03.021 06:49:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.021 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:19:03.021 06:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.021 06:49:16 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:03.021 06:49:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.021 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:19:03.021 06:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.021 06:49:16 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:03.021 06:49:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.021 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:19:03.021 [2024-12-14 06:49:16.880944] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.021 06:49:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.021 06:49:16 -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:03.021 06:49:16 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:03.021 06:49:16 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:03.021 06:49:16 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:03.021 06:49:16 -- nvmf/common.sh@520 -- # config=() 00:19:03.021 06:49:16 -- nvmf/common.sh@520 -- # local subsystem config 00:19:03.021 06:49:16 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:03.021 06:49:16 -- target/dif.sh@82 -- # gen_fio_conf 00:19:03.021 06:49:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:03.021 06:49:16 -- target/dif.sh@54 -- # local file 00:19:03.021 06:49:16 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:03.021 06:49:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:03.021 { 00:19:03.021 "params": { 00:19:03.021 "name": "Nvme$subsystem", 00:19:03.021 "trtype": "$TEST_TRANSPORT", 00:19:03.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:03.021 "adrfam": "ipv4", 00:19:03.021 "trsvcid": "$NVMF_PORT", 00:19:03.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:03.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:03.021 "hdgst": ${hdgst:-false}, 00:19:03.021 "ddgst": ${ddgst:-false} 00:19:03.021 }, 00:19:03.021 "method": "bdev_nvme_attach_controller" 00:19:03.021 } 00:19:03.021 EOF 00:19:03.021 )") 00:19:03.021 06:49:16 -- target/dif.sh@56 -- # cat 00:19:03.021 06:49:16 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:03.021 06:49:16 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:03.021 06:49:16 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.021 06:49:16 -- common/autotest_common.sh@1330 -- # shift 00:19:03.021 06:49:16 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:03.021 06:49:16 -- nvmf/common.sh@542 -- # cat 00:19:03.021 06:49:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.021 06:49:16 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:03.021 06:49:16 -- target/dif.sh@72 -- # (( file <= files )) 00:19:03.021 06:49:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.021 06:49:16 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:03.021 06:49:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:03.021 06:49:16 -- nvmf/common.sh@544 -- # jq . 
00:19:03.021 06:49:16 -- nvmf/common.sh@545 -- # IFS=, 00:19:03.021 06:49:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:03.021 "params": { 00:19:03.021 "name": "Nvme0", 00:19:03.021 "trtype": "tcp", 00:19:03.021 "traddr": "10.0.0.2", 00:19:03.021 "adrfam": "ipv4", 00:19:03.021 "trsvcid": "4420", 00:19:03.021 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:03.021 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:03.021 "hdgst": true, 00:19:03.021 "ddgst": true 00:19:03.021 }, 00:19:03.021 "method": "bdev_nvme_attach_controller" 00:19:03.021 }' 00:19:03.021 06:49:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:03.021 06:49:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:03.021 06:49:16 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.021 06:49:16 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:03.021 06:49:16 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:03.021 06:49:16 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:03.021 06:49:16 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:03.021 06:49:16 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:03.021 06:49:16 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:03.021 06:49:16 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:03.280 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:03.280 ... 00:19:03.280 fio-3.35 00:19:03.280 Starting 3 threads 00:19:03.538 [2024-12-14 06:49:17.416928] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
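The only functional change from the earlier runs is in the attach parameters printed just above: "hdgst": true and "ddgst": true enable the NVMe/TCP header and data digests (CRC32C over each PDU), which is what fio_dif_digest exercises with its 128 KiB reads. A plain kernel initiator can request the same protection with nvme-cli, roughly as follows (flag names as in current nvme-cli; treat this as a sketch, not part of the test):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
        --hdr-digest --data-digest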
00:19:03.538 [2024-12-14 06:49:17.417014] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:15.745 00:19:15.745 filename0: (groupid=0, jobs=1): err= 0: pid=75292: Sat Dec 14 06:49:27 2024 00:19:15.745 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(290MiB/10007msec) 00:19:15.745 slat (nsec): min=7119, max=80885, avg=15393.44, stdev=5735.38 00:19:15.745 clat (usec): min=11766, max=16039, avg=12911.86, stdev=410.86 00:19:15.745 lat (usec): min=11780, max=16065, avg=12927.26, stdev=411.30 00:19:15.745 clat percentiles (usec): 00:19:15.745 | 1.00th=[12125], 5.00th=[12256], 10.00th=[12387], 20.00th=[12518], 00:19:15.745 | 30.00th=[12780], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:19:15.745 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:19:15.745 | 99.00th=[13698], 99.50th=[13829], 99.90th=[16057], 99.95th=[16057], 00:19:15.745 | 99.99th=[16057] 00:19:15.745 bw ( KiB/s): min=28416, max=30720, per=33.32%, avg=29650.00, stdev=571.44, samples=19 00:19:15.745 iops : min= 222, max= 240, avg=231.53, stdev= 4.46, samples=19 00:19:15.745 lat (msec) : 20=100.00% 00:19:15.745 cpu : usr=91.58%, sys=7.83%, ctx=36, majf=0, minf=9 00:19:15.745 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.745 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.745 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:15.745 filename0: (groupid=0, jobs=1): err= 0: pid=75293: Sat Dec 14 06:49:27 2024 00:19:15.745 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(290MiB/10005msec) 00:19:15.745 slat (nsec): min=7209, max=80700, avg=16088.19, stdev=5276.38 00:19:15.745 clat (usec): min=11722, max=13953, avg=12905.94, stdev=393.97 00:19:15.745 lat (usec): min=11729, max=13981, avg=12922.03, stdev=394.34 00:19:15.745 clat percentiles (usec): 00:19:15.745 | 1.00th=[12125], 5.00th=[12256], 10.00th=[12387], 20.00th=[12518], 00:19:15.745 | 30.00th=[12780], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:19:15.745 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:19:15.745 | 99.00th=[13698], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:19:15.745 | 99.99th=[13960] 00:19:15.745 bw ( KiB/s): min=28416, max=30720, per=33.33%, avg=29662.68, stdev=635.07, samples=19 00:19:15.746 iops : min= 222, max= 240, avg=231.68, stdev= 4.94, samples=19 00:19:15.746 lat (msec) : 20=100.00% 00:19:15.746 cpu : usr=91.78%, sys=7.63%, ctx=6, majf=0, minf=0 00:19:15.746 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.746 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.746 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:15.746 filename0: (groupid=0, jobs=1): err= 0: pid=75294: Sat Dec 14 06:49:27 2024 00:19:15.746 read: IOPS=231, BW=29.0MiB/s (30.4MB/s)(290MiB/10006msec) 00:19:15.746 slat (nsec): min=7427, max=97464, avg=16107.66, stdev=5533.53 00:19:15.746 clat (usec): min=11763, max=14500, avg=12907.84, stdev=398.93 00:19:15.746 lat (usec): min=11779, max=14525, avg=12923.95, stdev=399.29 00:19:15.746 clat percentiles (usec): 00:19:15.746 | 1.00th=[12125], 5.00th=[12256], 
10.00th=[12387], 20.00th=[12518], 00:19:15.746 | 30.00th=[12780], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:19:15.746 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:19:15.746 | 99.00th=[13698], 99.50th=[13829], 99.90th=[14484], 99.95th=[14484], 00:19:15.746 | 99.99th=[14484] 00:19:15.746 bw ( KiB/s): min=28472, max=30720, per=33.32%, avg=29652.95, stdev=564.83, samples=19 00:19:15.746 iops : min= 222, max= 240, avg=231.53, stdev= 4.46, samples=19 00:19:15.746 lat (msec) : 20=100.00% 00:19:15.746 cpu : usr=92.57%, sys=6.80%, ctx=17, majf=0, minf=9 00:19:15.746 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.746 issued rwts: total=2319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.746 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:15.746 00:19:15.746 Run status group 0 (all jobs): 00:19:15.746 READ: bw=86.9MiB/s (91.1MB/s), 29.0MiB/s-29.0MiB/s (30.4MB/s-30.4MB/s), io=870MiB (912MB), run=10005-10007msec 00:19:15.746 06:49:27 -- target/dif.sh@132 -- # destroy_subsystems 0 00:19:15.746 06:49:27 -- target/dif.sh@43 -- # local sub 00:19:15.746 06:49:27 -- target/dif.sh@45 -- # for sub in "$@" 00:19:15.746 06:49:27 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:15.746 06:49:27 -- target/dif.sh@36 -- # local sub_id=0 00:19:15.746 06:49:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:15.746 06:49:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.746 06:49:27 -- common/autotest_common.sh@10 -- # set +x 00:19:15.746 06:49:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.746 06:49:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:15.746 06:49:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.746 06:49:27 -- common/autotest_common.sh@10 -- # set +x 00:19:15.746 06:49:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.746 00:19:15.746 real 0m10.882s 00:19:15.746 user 0m28.173s 00:19:15.746 sys 0m2.446s 00:19:15.746 06:49:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:15.746 ************************************ 00:19:15.746 END TEST fio_dif_digest 00:19:15.746 ************************************ 00:19:15.746 06:49:27 -- common/autotest_common.sh@10 -- # set +x 00:19:15.746 06:49:27 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:19:15.746 06:49:27 -- target/dif.sh@147 -- # nvmftestfini 00:19:15.746 06:49:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:15.746 06:49:27 -- nvmf/common.sh@116 -- # sync 00:19:15.746 06:49:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:15.746 06:49:27 -- nvmf/common.sh@119 -- # set +e 00:19:15.746 06:49:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:15.746 06:49:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:15.746 rmmod nvme_tcp 00:19:15.746 rmmod nvme_fabrics 00:19:15.746 rmmod nvme_keyring 00:19:15.746 06:49:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:15.746 06:49:27 -- nvmf/common.sh@123 -- # set -e 00:19:15.746 06:49:27 -- nvmf/common.sh@124 -- # return 0 00:19:15.746 06:49:27 -- nvmf/common.sh@477 -- # '[' -n 74525 ']' 00:19:15.746 06:49:27 -- nvmf/common.sh@478 -- # killprocess 74525 00:19:15.746 06:49:27 -- common/autotest_common.sh@936 -- # '[' -z 74525 ']' 00:19:15.746 06:49:27 -- common/autotest_common.sh@940 -- # kill 
-0 74525 00:19:15.746 06:49:27 -- common/autotest_common.sh@941 -- # uname 00:19:15.746 06:49:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:15.746 06:49:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74525 00:19:15.746 06:49:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:15.746 06:49:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:15.746 killing process with pid 74525 00:19:15.746 06:49:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74525' 00:19:15.746 06:49:27 -- common/autotest_common.sh@955 -- # kill 74525 00:19:15.746 06:49:27 -- common/autotest_common.sh@960 -- # wait 74525 00:19:15.746 06:49:28 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:15.746 06:49:28 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:15.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:15.746 Waiting for block devices as requested 00:19:15.746 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:15.746 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:15.746 06:49:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:15.746 06:49:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:15.746 06:49:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:15.746 06:49:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:15.746 06:49:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.746 06:49:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:15.746 06:49:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.746 06:49:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:15.746 00:19:15.746 real 0m59.170s 00:19:15.746 user 3m46.825s 00:19:15.746 sys 0m19.176s 00:19:15.746 06:49:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:15.746 06:49:28 -- common/autotest_common.sh@10 -- # set +x 00:19:15.746 ************************************ 00:19:15.746 END TEST nvmf_dif 00:19:15.746 ************************************ 00:19:15.746 06:49:28 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:15.746 06:49:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:15.746 06:49:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:15.746 06:49:28 -- common/autotest_common.sh@10 -- # set +x 00:19:15.746 ************************************ 00:19:15.746 START TEST nvmf_abort_qd_sizes 00:19:15.746 ************************************ 00:19:15.746 06:49:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:19:15.746 * Looking for test storage... 
00:19:15.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:15.746 06:49:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:15.746 06:49:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:15.746 06:49:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:15.746 06:49:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:15.746 06:49:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:15.746 06:49:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:15.746 06:49:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:15.746 06:49:28 -- scripts/common.sh@335 -- # IFS=.-: 00:19:15.746 06:49:28 -- scripts/common.sh@335 -- # read -ra ver1 00:19:15.746 06:49:28 -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.746 06:49:28 -- scripts/common.sh@336 -- # read -ra ver2 00:19:15.746 06:49:28 -- scripts/common.sh@337 -- # local 'op=<' 00:19:15.746 06:49:28 -- scripts/common.sh@339 -- # ver1_l=2 00:19:15.746 06:49:28 -- scripts/common.sh@340 -- # ver2_l=1 00:19:15.746 06:49:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:15.746 06:49:28 -- scripts/common.sh@343 -- # case "$op" in 00:19:15.746 06:49:28 -- scripts/common.sh@344 -- # : 1 00:19:15.746 06:49:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:15.746 06:49:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:15.746 06:49:28 -- scripts/common.sh@364 -- # decimal 1 00:19:15.746 06:49:28 -- scripts/common.sh@352 -- # local d=1 00:19:15.746 06:49:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.746 06:49:28 -- scripts/common.sh@354 -- # echo 1 00:19:15.746 06:49:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:15.746 06:49:28 -- scripts/common.sh@365 -- # decimal 2 00:19:15.746 06:49:28 -- scripts/common.sh@352 -- # local d=2 00:19:15.746 06:49:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.746 06:49:28 -- scripts/common.sh@354 -- # echo 2 00:19:15.746 06:49:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:15.746 06:49:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:15.746 06:49:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:15.746 06:49:28 -- scripts/common.sh@367 -- # return 0 00:19:15.746 06:49:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.746 06:49:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:15.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.746 --rc genhtml_branch_coverage=1 00:19:15.746 --rc genhtml_function_coverage=1 00:19:15.746 --rc genhtml_legend=1 00:19:15.746 --rc geninfo_all_blocks=1 00:19:15.746 --rc geninfo_unexecuted_blocks=1 00:19:15.746 00:19:15.746 ' 00:19:15.746 06:49:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:15.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.747 --rc genhtml_branch_coverage=1 00:19:15.747 --rc genhtml_function_coverage=1 00:19:15.747 --rc genhtml_legend=1 00:19:15.747 --rc geninfo_all_blocks=1 00:19:15.747 --rc geninfo_unexecuted_blocks=1 00:19:15.747 00:19:15.747 ' 00:19:15.747 06:49:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:15.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.747 --rc genhtml_branch_coverage=1 00:19:15.747 --rc genhtml_function_coverage=1 00:19:15.747 --rc genhtml_legend=1 00:19:15.747 --rc geninfo_all_blocks=1 00:19:15.747 --rc geninfo_unexecuted_blocks=1 00:19:15.747 00:19:15.747 ' 00:19:15.747 
06:49:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:15.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.747 --rc genhtml_branch_coverage=1 00:19:15.747 --rc genhtml_function_coverage=1 00:19:15.747 --rc genhtml_legend=1 00:19:15.747 --rc geninfo_all_blocks=1 00:19:15.747 --rc geninfo_unexecuted_blocks=1 00:19:15.747 00:19:15.747 ' 00:19:15.747 06:49:28 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:15.747 06:49:28 -- nvmf/common.sh@7 -- # uname -s 00:19:15.747 06:49:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.747 06:49:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.747 06:49:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.747 06:49:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.747 06:49:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.747 06:49:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.747 06:49:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.747 06:49:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.747 06:49:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.747 06:49:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.747 06:49:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 00:19:15.747 06:49:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=1897a557-42a7-4044-982a-fbab8b2b3e32 00:19:15.747 06:49:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.747 06:49:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.747 06:49:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:15.747 06:49:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:15.747 06:49:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.747 06:49:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.747 06:49:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.747 06:49:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.747 06:49:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.747 06:49:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.747 06:49:28 -- paths/export.sh@5 -- # export PATH 00:19:15.747 06:49:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.747 06:49:28 -- nvmf/common.sh@46 -- # : 0 00:19:15.747 06:49:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:15.747 06:49:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:15.747 06:49:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:15.747 06:49:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.747 06:49:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.747 06:49:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:15.747 06:49:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:15.747 06:49:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:15.747 06:49:28 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:19:15.747 06:49:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:15.747 06:49:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.747 06:49:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:15.747 06:49:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:15.747 06:49:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:15.747 06:49:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.747 06:49:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:15.747 06:49:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.747 06:49:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:15.747 06:49:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:15.747 06:49:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:15.747 06:49:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:15.747 06:49:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:15.747 06:49:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:15.747 06:49:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.747 06:49:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:15.747 06:49:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:15.747 06:49:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:15.747 06:49:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:15.747 06:49:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:15.747 06:49:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:15.747 06:49:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.747 06:49:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:15.747 06:49:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:15.747 06:49:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:15.747 06:49:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:15.747 06:49:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:15.747 06:49:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:15.747 Cannot find device "nvmf_tgt_br" 00:19:15.747 06:49:29 -- nvmf/common.sh@154 -- # true 00:19:15.747 06:49:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.747 Cannot find device "nvmf_tgt_br2" 00:19:15.747 06:49:29 -- nvmf/common.sh@155 -- # true 
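The "Cannot find device" lines are only the cleanup of a previous run; nvmf_veth_init then rebuilds the test network used by every NVMe/TCP case in this job: the target lives in the nvmf_tgt_ns_spdk namespace, veth pairs tie it to the host through the nvmf_br bridge, and TCP port 4420 is opened on the initiator-side interface. Condensed from the trace that follows (same names and addresses, link-up steps omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, 10.0.0.2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT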
00:19:15.747 06:49:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:15.747 06:49:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:15.747 Cannot find device "nvmf_tgt_br" 00:19:15.747 06:49:29 -- nvmf/common.sh@157 -- # true 00:19:15.747 06:49:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:15.747 Cannot find device "nvmf_tgt_br2" 00:19:15.747 06:49:29 -- nvmf/common.sh@158 -- # true 00:19:15.747 06:49:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:15.747 06:49:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:15.747 06:49:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.747 06:49:29 -- nvmf/common.sh@161 -- # true 00:19:15.747 06:49:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.747 06:49:29 -- nvmf/common.sh@162 -- # true 00:19:15.747 06:49:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:15.747 06:49:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:15.747 06:49:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:15.747 06:49:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:15.747 06:49:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:15.747 06:49:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:15.747 06:49:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:15.747 06:49:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:15.747 06:49:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:15.747 06:49:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:15.747 06:49:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:15.747 06:49:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:15.747 06:49:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:15.747 06:49:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:15.747 06:49:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:15.747 06:49:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:15.747 06:49:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:15.747 06:49:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:15.747 06:49:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:15.747 06:49:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:15.747 06:49:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:15.747 06:49:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:15.747 06:49:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:15.747 06:49:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:15.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:15.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:19:15.747 00:19:15.747 --- 10.0.0.2 ping statistics --- 00:19:15.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.747 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:19:15.747 06:49:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:15.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:15.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:19:15.747 00:19:15.747 --- 10.0.0.3 ping statistics --- 00:19:15.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.747 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:15.747 06:49:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:15.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:15.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:15.748 00:19:15.748 --- 10.0.0.1 ping statistics --- 00:19:15.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:15.748 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:15.748 06:49:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:15.748 06:49:29 -- nvmf/common.sh@421 -- # return 0 00:19:15.748 06:49:29 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:15.748 06:49:29 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:16.006 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:16.265 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:19:16.265 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:19:16.265 06:49:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.265 06:49:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:16.265 06:49:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:16.265 06:49:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.265 06:49:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:16.265 06:49:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:16.265 06:49:30 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:19:16.265 06:49:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:16.265 06:49:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:16.265 06:49:30 -- common/autotest_common.sh@10 -- # set +x 00:19:16.265 06:49:30 -- nvmf/common.sh@469 -- # nvmfpid=75899 00:19:16.265 06:49:30 -- nvmf/common.sh@470 -- # waitforlisten 75899 00:19:16.265 06:49:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:19:16.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.265 06:49:30 -- common/autotest_common.sh@829 -- # '[' -z 75899 ']' 00:19:16.265 06:49:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.265 06:49:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.265 06:49:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.265 06:49:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.265 06:49:30 -- common/autotest_common.sh@10 -- # set +x 00:19:16.265 [2024-12-14 06:49:30.241355] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
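With the network verified by the pings above, nvmfappstart launches the target inside that namespace on four cores (-m 0xf) with every tracepoint group enabled (-e 0xFFFF), and waitforlisten polls /var/tmp/spdk.sock until the app answers RPCs. Stripped of the wrappers, the traced launch is simply:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    # the script then waits on /var/tmp/spdk.sock before issuing nvmf_create_transport and friends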
00:19:16.265 [2024-12-14 06:49:30.241491] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.528 [2024-12-14 06:49:30.381211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:16.528 [2024-12-14 06:49:30.456293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:16.528 [2024-12-14 06:49:30.456689] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.528 [2024-12-14 06:49:30.456807] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.528 [2024-12-14 06:49:30.456945] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.528 [2024-12-14 06:49:30.457181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.528 [2024-12-14 06:49:30.457623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.528 [2024-12-14 06:49:30.457747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.528 [2024-12-14 06:49:30.457827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.463 06:49:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.463 06:49:31 -- common/autotest_common.sh@862 -- # return 0 00:19:17.463 06:49:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:17.463 06:49:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:17.463 06:49:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.463 06:49:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.463 06:49:31 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:19:17.463 06:49:31 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:19:17.463 06:49:31 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:19:17.463 06:49:31 -- scripts/common.sh@311 -- # local bdf bdfs 00:19:17.463 06:49:31 -- scripts/common.sh@312 -- # local nvmes 00:19:17.463 06:49:31 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:19:17.463 06:49:31 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:17.463 06:49:31 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:19:17.463 06:49:31 -- scripts/common.sh@297 -- # local bdf= 00:19:17.463 06:49:31 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:19:17.463 06:49:31 -- scripts/common.sh@232 -- # local class 00:19:17.463 06:49:31 -- scripts/common.sh@233 -- # local subclass 00:19:17.463 06:49:31 -- scripts/common.sh@234 -- # local progif 00:19:17.463 06:49:31 -- scripts/common.sh@235 -- # printf %02x 1 00:19:17.463 06:49:31 -- scripts/common.sh@235 -- # class=01 00:19:17.463 06:49:31 -- scripts/common.sh@236 -- # printf %02x 8 00:19:17.463 06:49:31 -- scripts/common.sh@236 -- # subclass=08 00:19:17.463 06:49:31 -- scripts/common.sh@237 -- # printf %02x 2 00:19:17.463 06:49:31 -- scripts/common.sh@237 -- # progif=02 00:19:17.463 06:49:31 -- scripts/common.sh@239 -- # hash lspci 00:19:17.463 06:49:31 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:19:17.463 06:49:31 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:19:17.463 06:49:31 -- scripts/common.sh@242 -- # grep -i -- -p02 00:19:17.463 06:49:31 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:17.463 06:49:31 -- scripts/common.sh@244 -- # tr -d '"' 00:19:17.463 06:49:31 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:17.463 06:49:31 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:19:17.463 06:49:31 -- scripts/common.sh@15 -- # local i 00:19:17.463 06:49:31 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:19:17.463 06:49:31 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:17.463 06:49:31 -- scripts/common.sh@24 -- # return 0 00:19:17.463 06:49:31 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:19:17.463 06:49:31 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:17.463 06:49:31 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:19:17.463 06:49:31 -- scripts/common.sh@15 -- # local i 00:19:17.463 06:49:31 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:19:17.463 06:49:31 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:17.463 06:49:31 -- scripts/common.sh@24 -- # return 0 00:19:17.463 06:49:31 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:19:17.463 06:49:31 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:17.463 06:49:31 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:19:17.463 06:49:31 -- scripts/common.sh@322 -- # uname -s 00:19:17.463 06:49:31 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:17.463 06:49:31 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:17.463 06:49:31 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:19:17.463 06:49:31 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:19:17.463 06:49:31 -- scripts/common.sh@322 -- # uname -s 00:19:17.463 06:49:31 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:19:17.463 06:49:31 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:19:17.463 06:49:31 -- scripts/common.sh@327 -- # (( 2 )) 00:19:17.463 06:49:31 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:19:17.463 06:49:31 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:19:17.463 06:49:31 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:19:17.463 06:49:31 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:19:17.463 06:49:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:17.463 06:49:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:17.463 06:49:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.463 ************************************ 00:19:17.463 START TEST spdk_target_abort 00:19:17.463 ************************************ 00:19:17.463 06:49:31 -- common/autotest_common.sh@1114 -- # spdk_target 00:19:17.463 06:49:31 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:19:17.463 06:49:31 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:17.463 06:49:31 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:19:17.463 06:49:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.463 06:49:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.463 spdk_targetn1 00:19:17.463 06:49:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.463 06:49:31 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:17.463 06:49:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.463 06:49:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.463 [2024-12-14 
06:49:31.382639] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.463 06:49:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.463 06:49:31 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:19:17.463 06:49:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.463 06:49:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.463 06:49:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:19:17.464 06:49:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.464 06:49:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.464 06:49:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:19:17.464 06:49:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.464 06:49:31 -- common/autotest_common.sh@10 -- # set +x 00:19:17.464 [2024-12-14 06:49:31.414815] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.464 06:49:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:17.464 06:49:31 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:20.749 Initializing NVMe Controllers 00:19:20.749 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:20.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:20.749 Initialization complete. Launching workers. 00:19:20.749 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10355, failed: 0 00:19:20.749 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1042, failed to submit 9313 00:19:20.749 success 757, unsuccess 285, failed 0 00:19:20.749 06:49:34 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:20.750 06:49:34 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:24.033 Initializing NVMe Controllers 00:19:24.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:24.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:24.033 Initialization complete. Launching workers. 00:19:24.033 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8976, failed: 0 00:19:24.033 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1132, failed to submit 7844 00:19:24.033 success 447, unsuccess 685, failed 0 00:19:24.033 06:49:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:24.033 06:49:37 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:19:27.321 Initializing NVMe Controllers 00:19:27.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:19:27.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:19:27.321 Initialization complete. Launching workers. 
00:19:27.321 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31288, failed: 0 00:19:27.321 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2359, failed to submit 28929 00:19:27.321 success 491, unsuccess 1868, failed 0 00:19:27.321 06:49:41 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:19:27.321 06:49:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.321 06:49:41 -- common/autotest_common.sh@10 -- # set +x 00:19:27.321 06:49:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.321 06:49:41 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:19:27.321 06:49:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.321 06:49:41 -- common/autotest_common.sh@10 -- # set +x 00:19:27.580 06:49:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.580 06:49:41 -- target/abort_qd_sizes.sh@62 -- # killprocess 75899 00:19:27.580 06:49:41 -- common/autotest_common.sh@936 -- # '[' -z 75899 ']' 00:19:27.580 06:49:41 -- common/autotest_common.sh@940 -- # kill -0 75899 00:19:27.580 06:49:41 -- common/autotest_common.sh@941 -- # uname 00:19:27.580 06:49:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:27.580 06:49:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75899 00:19:27.580 killing process with pid 75899 00:19:27.580 06:49:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:27.580 06:49:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:27.580 06:49:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75899' 00:19:27.580 06:49:41 -- common/autotest_common.sh@955 -- # kill 75899 00:19:27.580 06:49:41 -- common/autotest_common.sh@960 -- # wait 75899 00:19:27.839 ************************************ 00:19:27.839 END TEST spdk_target_abort 00:19:27.839 ************************************ 00:19:27.839 00:19:27.839 real 0m10.365s 00:19:27.839 user 0m42.236s 00:19:27.839 sys 0m2.048s 00:19:27.839 06:49:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:27.839 06:49:41 -- common/autotest_common.sh@10 -- # set +x 00:19:27.839 06:49:41 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:19:27.839 06:49:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:27.839 06:49:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:27.839 06:49:41 -- common/autotest_common.sh@10 -- # set +x 00:19:27.839 ************************************ 00:19:27.839 START TEST kernel_target_abort 00:19:27.839 ************************************ 00:19:27.839 06:49:41 -- common/autotest_common.sh@1114 -- # kernel_target 00:19:27.839 06:49:41 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:19:27.840 06:49:41 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:19:27.840 06:49:41 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:19:27.840 06:49:41 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:19:27.840 06:49:41 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:19:27.840 06:49:41 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:27.840 06:49:41 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:27.840 06:49:41 -- nvmf/common.sh@627 -- # local block nvme 00:19:27.840 06:49:41 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:19:27.840 06:49:41 -- nvmf/common.sh@630 -- # modprobe nvmet 00:19:27.840 06:49:41 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:27.840 06:49:41 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:28.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:28.357 Waiting for block devices as requested 00:19:28.357 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:28.357 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:19:28.357 06:49:42 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:28.357 06:49:42 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:28.357 06:49:42 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:19:28.357 06:49:42 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:19:28.357 06:49:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:28.615 No valid GPT data, bailing 00:19:28.615 06:49:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:28.615 06:49:42 -- scripts/common.sh@393 -- # pt= 00:19:28.615 06:49:42 -- scripts/common.sh@394 -- # return 1 00:19:28.615 06:49:42 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:19:28.615 06:49:42 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:28.615 06:49:42 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:28.615 06:49:42 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:19:28.615 06:49:42 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:19:28.615 06:49:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:28.615 No valid GPT data, bailing 00:19:28.615 06:49:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:28.615 06:49:42 -- scripts/common.sh@393 -- # pt= 00:19:28.615 06:49:42 -- scripts/common.sh@394 -- # return 1 00:19:28.615 06:49:42 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:19:28.615 06:49:42 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:28.615 06:49:42 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:19:28.615 06:49:42 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:19:28.615 06:49:42 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:19:28.615 06:49:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:19:28.615 No valid GPT data, bailing 00:19:28.615 06:49:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:19:28.615 06:49:42 -- scripts/common.sh@393 -- # pt= 00:19:28.615 06:49:42 -- scripts/common.sh@394 -- # return 1 00:19:28.615 06:49:42 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:19:28.615 06:49:42 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:19:28.615 06:49:42 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:19:28.615 06:49:42 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:19:28.615 06:49:42 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:19:28.615 06:49:42 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:19:28.874 No valid GPT data, bailing 00:19:28.874 06:49:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:19:28.874 06:49:42 -- scripts/common.sh@393 -- # pt= 00:19:28.874 06:49:42 -- scripts/common.sh@394 -- # return 1 00:19:28.874 06:49:42 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:19:28.874 06:49:42 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:19:28.874 06:49:42 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:28.874 06:49:42 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:28.874 06:49:42 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:28.874 06:49:42 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:19:28.874 06:49:42 -- nvmf/common.sh@654 -- # echo 1 00:19:28.874 06:49:42 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:19:28.874 06:49:42 -- nvmf/common.sh@656 -- # echo 1 00:19:28.874 06:49:42 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:19:28.874 06:49:42 -- nvmf/common.sh@663 -- # echo tcp 00:19:28.874 06:49:42 -- nvmf/common.sh@664 -- # echo 4420 00:19:28.874 06:49:42 -- nvmf/common.sh@665 -- # echo ipv4 00:19:28.874 06:49:42 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:28.874 06:49:42 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1897a557-42a7-4044-982a-fbab8b2b3e32 --hostid=1897a557-42a7-4044-982a-fbab8b2b3e32 -a 10.0.0.1 -t tcp -s 4420 00:19:28.874 00:19:28.874 Discovery Log Number of Records 2, Generation counter 2 00:19:28.874 =====Discovery Log Entry 0====== 00:19:28.874 trtype: tcp 00:19:28.874 adrfam: ipv4 00:19:28.874 subtype: current discovery subsystem 00:19:28.874 treq: not specified, sq flow control disable supported 00:19:28.874 portid: 1 00:19:28.874 trsvcid: 4420 00:19:28.874 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:28.874 traddr: 10.0.0.1 00:19:28.874 eflags: none 00:19:28.874 sectype: none 00:19:28.874 =====Discovery Log Entry 1====== 00:19:28.874 trtype: tcp 00:19:28.874 adrfam: ipv4 00:19:28.874 subtype: nvme subsystem 00:19:28.874 treq: not specified, sq flow control disable supported 00:19:28.874 portid: 1 00:19:28.874 trsvcid: 4420 00:19:28.874 subnqn: kernel_target 00:19:28.874 traddr: 10.0.0.1 00:19:28.874 eflags: none 00:19:28.874 sectype: none 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@24 -- # local target r 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:28.874 06:49:42 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:32.161 Initializing NVMe Controllers 00:19:32.161 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:32.161 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:32.161 Initialization complete. Launching workers. 00:19:32.161 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31245, failed: 0 00:19:32.161 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31245, failed to submit 0 00:19:32.161 success 0, unsuccess 31245, failed 0 00:19:32.161 06:49:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:32.161 06:49:45 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:35.447 Initializing NVMe Controllers 00:19:35.447 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:35.447 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:35.447 Initialization complete. Launching workers. 00:19:35.447 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 67798, failed: 0 00:19:35.447 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29180, failed to submit 38618 00:19:35.447 success 0, unsuccess 29180, failed 0 00:19:35.447 06:49:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:19:35.447 06:49:49 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:19:38.733 Initializing NVMe Controllers 00:19:38.733 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:19:38.733 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:19:38.733 Initialization complete. Launching workers. 
00:19:38.733 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 75341, failed: 0 00:19:38.733 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18802, failed to submit 56539 00:19:38.733 success 0, unsuccess 18802, failed 0 00:19:38.733 06:49:52 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:19:38.733 06:49:52 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:19:38.733 06:49:52 -- nvmf/common.sh@677 -- # echo 0 00:19:38.733 06:49:52 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:19:38.733 06:49:52 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:19:38.733 06:49:52 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:38.733 06:49:52 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:19:38.733 06:49:52 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:19:38.733 06:49:52 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:19:38.733 ************************************ 00:19:38.733 END TEST kernel_target_abort 00:19:38.733 ************************************ 00:19:38.733 00:19:38.733 real 0m10.567s 00:19:38.733 user 0m5.760s 00:19:38.733 sys 0m2.258s 00:19:38.733 06:49:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:38.733 06:49:52 -- common/autotest_common.sh@10 -- # set +x 00:19:38.733 06:49:52 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:19:38.733 06:49:52 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:19:38.733 06:49:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:38.733 06:49:52 -- nvmf/common.sh@116 -- # sync 00:19:38.733 06:49:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:38.733 06:49:52 -- nvmf/common.sh@119 -- # set +e 00:19:38.733 06:49:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:38.733 06:49:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:38.733 rmmod nvme_tcp 00:19:38.733 rmmod nvme_fabrics 00:19:38.733 rmmod nvme_keyring 00:19:38.733 06:49:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:38.733 06:49:52 -- nvmf/common.sh@123 -- # set -e 00:19:38.733 06:49:52 -- nvmf/common.sh@124 -- # return 0 00:19:38.733 06:49:52 -- nvmf/common.sh@477 -- # '[' -n 75899 ']' 00:19:38.733 06:49:52 -- nvmf/common.sh@478 -- # killprocess 75899 00:19:38.733 06:49:52 -- common/autotest_common.sh@936 -- # '[' -z 75899 ']' 00:19:38.733 06:49:52 -- common/autotest_common.sh@940 -- # kill -0 75899 00:19:38.733 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (75899) - No such process 00:19:38.733 06:49:52 -- common/autotest_common.sh@963 -- # echo 'Process with pid 75899 is not found' 00:19:38.733 Process with pid 75899 is not found 00:19:38.733 06:49:52 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:19:38.733 06:49:52 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:39.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:39.560 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:39.560 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:39.560 06:49:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:39.560 06:49:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:39.560 06:49:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.560 06:49:53 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:19:39.560 06:49:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.560 06:49:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:39.560 06:49:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.560 06:49:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:39.560 ************************************ 00:19:39.560 END TEST nvmf_abort_qd_sizes 00:19:39.560 ************************************ 00:19:39.560 00:19:39.560 real 0m24.650s 00:19:39.560 user 0m49.473s 00:19:39.560 sys 0m5.557s 00:19:39.560 06:49:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:39.560 06:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:39.560 06:49:53 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:19:39.560 06:49:53 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:19:39.560 06:49:53 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:19:39.560 06:49:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:39.560 06:49:53 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:19:39.560 06:49:53 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:19:39.560 06:49:53 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:19:39.560 06:49:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:39.560 06:49:53 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:19:39.560 06:49:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:39.560 06:49:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:39.560 06:49:53 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:19:39.560 06:49:53 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:19:39.560 06:49:53 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:19:39.560 06:49:53 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:19:39.560 06:49:53 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:19:39.560 06:49:53 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:19:39.560 06:49:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:39.560 06:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:39.560 06:49:53 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:19:39.560 06:49:53 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:19:39.560 06:49:53 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:19:39.560 06:49:53 -- common/autotest_common.sh@10 -- # set +x 00:19:41.465 INFO: APP EXITING 00:19:41.465 INFO: killing all VMs 00:19:41.465 INFO: killing vhost app 00:19:41.465 INFO: EXIT DONE 00:19:42.032 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:42.032 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:19:42.032 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:19:42.600 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:42.600 Cleaning 00:19:42.600 Removing: /var/run/dpdk/spdk0/config 00:19:42.600 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:42.600 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:42.600 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:42.600 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:42.600 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:42.600 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:42.600 Removing: /var/run/dpdk/spdk1/config 00:19:42.600 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:19:42.860 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:19:42.860 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:19:42.860 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:19:42.860 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:19:42.860 Removing: /var/run/dpdk/spdk1/hugepage_info 00:19:42.860 Removing: /var/run/dpdk/spdk2/config 00:19:42.860 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:19:42.860 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:19:42.860 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:19:42.860 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:19:42.860 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:19:42.860 Removing: /var/run/dpdk/spdk2/hugepage_info 00:19:42.860 Removing: /var/run/dpdk/spdk3/config 00:19:42.860 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:19:42.860 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:19:42.860 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:19:42.860 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:19:42.860 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:19:42.860 Removing: /var/run/dpdk/spdk3/hugepage_info 00:19:42.860 Removing: /var/run/dpdk/spdk4/config 00:19:42.860 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:19:42.860 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:19:42.860 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:19:42.860 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:19:42.860 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:19:42.860 Removing: /var/run/dpdk/spdk4/hugepage_info 00:19:42.860 Removing: /dev/shm/nvmf_trace.0 00:19:42.860 Removing: /dev/shm/spdk_tgt_trace.pid53806 00:19:42.860 Removing: /var/run/dpdk/spdk0 00:19:42.860 Removing: /var/run/dpdk/spdk1 00:19:42.860 Removing: /var/run/dpdk/spdk2 00:19:42.860 Removing: /var/run/dpdk/spdk3 00:19:42.860 Removing: /var/run/dpdk/spdk4 00:19:42.860 Removing: /var/run/dpdk/spdk_pid53665 00:19:42.860 Removing: /var/run/dpdk/spdk_pid53806 00:19:42.860 Removing: /var/run/dpdk/spdk_pid54064 00:19:42.860 Removing: /var/run/dpdk/spdk_pid54255 00:19:42.860 Removing: /var/run/dpdk/spdk_pid54397 00:19:42.860 Removing: /var/run/dpdk/spdk_pid54474 00:19:42.860 Removing: /var/run/dpdk/spdk_pid54557 00:19:42.860 Removing: /var/run/dpdk/spdk_pid54649 00:19:42.860 Removing: /var/run/dpdk/spdk_pid54728 00:19:42.860 Removing: /var/run/dpdk/spdk_pid54761 00:19:42.860 Removing: /var/run/dpdk/spdk_pid54802 00:19:42.860 Removing: /var/run/dpdk/spdk_pid54865 00:19:42.860 Removing: /var/run/dpdk/spdk_pid54935 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55380 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55427 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55478 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55494 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55556 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55572 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55634 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55650 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55701 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55719 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55759 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55777 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55901 00:19:42.860 Removing: /var/run/dpdk/spdk_pid55936 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56018 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56064 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56094 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56147 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56172 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56201 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56215 
00:19:42.860 Removing: /var/run/dpdk/spdk_pid56255 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56269 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56304 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56323 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56352 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56372 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56406 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56420 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56455 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56474 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56509 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56523 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56557 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56577 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56606 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56625 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56660 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56674 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56708 00:19:42.860 Removing: /var/run/dpdk/spdk_pid56728 00:19:43.120 Removing: /var/run/dpdk/spdk_pid56757 00:19:43.120 Removing: /var/run/dpdk/spdk_pid56776 00:19:43.120 Removing: /var/run/dpdk/spdk_pid56811 00:19:43.120 Removing: /var/run/dpdk/spdk_pid56825 00:19:43.120 Removing: /var/run/dpdk/spdk_pid56865 00:19:43.120 Removing: /var/run/dpdk/spdk_pid56879 00:19:43.120 Removing: /var/run/dpdk/spdk_pid56908 00:19:43.120 Removing: /var/run/dpdk/spdk_pid56933 00:19:43.120 Removing: /var/run/dpdk/spdk_pid56962 00:19:43.120 Removing: /var/run/dpdk/spdk_pid56985 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57022 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57039 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57081 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57096 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57131 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57148 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57182 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57259 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57346 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57673 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57690 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57723 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57736 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57749 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57773 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57780 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57799 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57817 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57824 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57843 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57861 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57868 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57887 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57905 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57912 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57931 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57949 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57957 00:19:43.120 Removing: /var/run/dpdk/spdk_pid57976 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58000 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58018 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58040 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58110 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58137 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58146 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58169 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58186 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58188 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58234 00:19:43.120 Removing: 
/var/run/dpdk/spdk_pid58240 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58272 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58274 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58282 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58289 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58297 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58304 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58311 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58321 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58343 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58374 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58379 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58408 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58417 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58425 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58467 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58479 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58505 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58513 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58520 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58528 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58530 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58543 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58545 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58552 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58633 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58670 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58788 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58816 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58860 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58880 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58895 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58909 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58944 00:19:43.120 Removing: /var/run/dpdk/spdk_pid58959 00:19:43.120 Removing: /var/run/dpdk/spdk_pid59035 00:19:43.379 Removing: /var/run/dpdk/spdk_pid59049 00:19:43.379 Removing: /var/run/dpdk/spdk_pid59096 00:19:43.379 Removing: /var/run/dpdk/spdk_pid59164 00:19:43.379 Removing: /var/run/dpdk/spdk_pid59214 00:19:43.379 Removing: /var/run/dpdk/spdk_pid59239 00:19:43.379 Removing: /var/run/dpdk/spdk_pid59336 00:19:43.379 Removing: /var/run/dpdk/spdk_pid59378 00:19:43.379 Removing: /var/run/dpdk/spdk_pid59415 00:19:43.379 Removing: /var/run/dpdk/spdk_pid59633 00:19:43.379 Removing: /var/run/dpdk/spdk_pid59725 00:19:43.379 Removing: /var/run/dpdk/spdk_pid59758 00:19:43.379 Removing: /var/run/dpdk/spdk_pid60083 00:19:43.379 Removing: /var/run/dpdk/spdk_pid60127 00:19:43.379 Removing: /var/run/dpdk/spdk_pid60439 00:19:43.379 Removing: /var/run/dpdk/spdk_pid60852 00:19:43.379 Removing: /var/run/dpdk/spdk_pid61122 00:19:43.379 Removing: /var/run/dpdk/spdk_pid61908 00:19:43.379 Removing: /var/run/dpdk/spdk_pid62744 00:19:43.379 Removing: /var/run/dpdk/spdk_pid62861 00:19:43.379 Removing: /var/run/dpdk/spdk_pid62923 00:19:43.379 Removing: /var/run/dpdk/spdk_pid64201 00:19:43.379 Removing: /var/run/dpdk/spdk_pid64423 00:19:43.379 Removing: /var/run/dpdk/spdk_pid64743 00:19:43.379 Removing: /var/run/dpdk/spdk_pid64853 00:19:43.379 Removing: /var/run/dpdk/spdk_pid64986 00:19:43.379 Removing: /var/run/dpdk/spdk_pid65008 00:19:43.379 Removing: /var/run/dpdk/spdk_pid65036 00:19:43.379 Removing: /var/run/dpdk/spdk_pid65063 00:19:43.379 Removing: /var/run/dpdk/spdk_pid65159 00:19:43.379 Removing: /var/run/dpdk/spdk_pid65295 00:19:43.379 Removing: /var/run/dpdk/spdk_pid65437 00:19:43.379 Removing: /var/run/dpdk/spdk_pid65512 00:19:43.379 Removing: /var/run/dpdk/spdk_pid65911 00:19:43.379 Removing: /var/run/dpdk/spdk_pid66270 
00:19:43.379 Removing: /var/run/dpdk/spdk_pid66272 00:19:43.379 Removing: /var/run/dpdk/spdk_pid68497 00:19:43.379 Removing: /var/run/dpdk/spdk_pid68499 00:19:43.379 Removing: /var/run/dpdk/spdk_pid68785 00:19:43.379 Removing: /var/run/dpdk/spdk_pid68799 00:19:43.379 Removing: /var/run/dpdk/spdk_pid68819 00:19:43.379 Removing: /var/run/dpdk/spdk_pid68844 00:19:43.379 Removing: /var/run/dpdk/spdk_pid68850 00:19:43.379 Removing: /var/run/dpdk/spdk_pid68941 00:19:43.379 Removing: /var/run/dpdk/spdk_pid68943 00:19:43.380 Removing: /var/run/dpdk/spdk_pid69051 00:19:43.380 Removing: /var/run/dpdk/spdk_pid69058 00:19:43.380 Removing: /var/run/dpdk/spdk_pid69166 00:19:43.380 Removing: /var/run/dpdk/spdk_pid69174 00:19:43.380 Removing: /var/run/dpdk/spdk_pid69583 00:19:43.380 Removing: /var/run/dpdk/spdk_pid69627 00:19:43.380 Removing: /var/run/dpdk/spdk_pid69736 00:19:43.380 Removing: /var/run/dpdk/spdk_pid69815 00:19:43.380 Removing: /var/run/dpdk/spdk_pid70127 00:19:43.380 Removing: /var/run/dpdk/spdk_pid70331 00:19:43.380 Removing: /var/run/dpdk/spdk_pid70717 00:19:43.380 Removing: /var/run/dpdk/spdk_pid71254 00:19:43.380 Removing: /var/run/dpdk/spdk_pid71692 00:19:43.380 Removing: /var/run/dpdk/spdk_pid71740 00:19:43.380 Removing: /var/run/dpdk/spdk_pid71787 00:19:43.380 Removing: /var/run/dpdk/spdk_pid71841 00:19:43.380 Removing: /var/run/dpdk/spdk_pid71936 00:19:43.380 Removing: /var/run/dpdk/spdk_pid71996 00:19:43.380 Removing: /var/run/dpdk/spdk_pid72057 00:19:43.380 Removing: /var/run/dpdk/spdk_pid72112 00:19:43.380 Removing: /var/run/dpdk/spdk_pid72451 00:19:43.380 Removing: /var/run/dpdk/spdk_pid73627 00:19:43.380 Removing: /var/run/dpdk/spdk_pid73773 00:19:43.380 Removing: /var/run/dpdk/spdk_pid74023 00:19:43.380 Removing: /var/run/dpdk/spdk_pid74582 00:19:43.380 Removing: /var/run/dpdk/spdk_pid74746 00:19:43.380 Removing: /var/run/dpdk/spdk_pid74904 00:19:43.380 Removing: /var/run/dpdk/spdk_pid75001 00:19:43.380 Removing: /var/run/dpdk/spdk_pid75173 00:19:43.380 Removing: /var/run/dpdk/spdk_pid75288 00:19:43.380 Removing: /var/run/dpdk/spdk_pid75950 00:19:43.380 Removing: /var/run/dpdk/spdk_pid75985 00:19:43.380 Removing: /var/run/dpdk/spdk_pid76020 00:19:43.380 Removing: /var/run/dpdk/spdk_pid76276 00:19:43.380 Removing: /var/run/dpdk/spdk_pid76311 00:19:43.380 Removing: /var/run/dpdk/spdk_pid76346 00:19:43.380 Clean 00:19:43.639 killing process with pid 48053 00:19:43.639 killing process with pid 48054 00:19:43.639 06:49:57 -- common/autotest_common.sh@1446 -- # return 0 00:19:43.639 06:49:57 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:19:43.639 06:49:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:43.639 06:49:57 -- common/autotest_common.sh@10 -- # set +x 00:19:43.639 06:49:57 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:19:43.639 06:49:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:43.639 06:49:57 -- common/autotest_common.sh@10 -- # set +x 00:19:43.639 06:49:57 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:43.639 06:49:57 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:43.639 06:49:57 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:43.639 06:49:57 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:19:43.639 06:49:57 -- spdk/autotest.sh@383 -- # hostname 00:19:43.639 06:49:57 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:43.898 geninfo: WARNING: invalid characters removed from testname! 00:20:10.546 06:50:20 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:10.546 06:50:23 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:12.446 06:50:26 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:14.973 06:50:28 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:17.503 06:50:31 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:20.076 06:50:33 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:22.615 06:50:36 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:22.615 06:50:36 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:20:22.615 06:50:36 -- common/autotest_common.sh@1690 -- $ lcov --version 00:20:22.615 06:50:36 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:20:22.615 06:50:36 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:20:22.615 06:50:36 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:20:22.615 06:50:36 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:20:22.615 06:50:36 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:20:22.615 06:50:36 -- scripts/common.sh@335 -- $ IFS=.-: 00:20:22.615 06:50:36 -- scripts/common.sh@335 -- $ read -ra ver1 00:20:22.615 06:50:36 -- scripts/common.sh@336 -- $ IFS=.-: 
00:20:22.615 06:50:36 -- scripts/common.sh@336 -- $ read -ra ver2 00:20:22.615 06:50:36 -- scripts/common.sh@337 -- $ local 'op=<' 00:20:22.615 06:50:36 -- scripts/common.sh@339 -- $ ver1_l=2 00:20:22.615 06:50:36 -- scripts/common.sh@340 -- $ ver2_l=1 00:20:22.615 06:50:36 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:20:22.615 06:50:36 -- scripts/common.sh@343 -- $ case "$op" in 00:20:22.615 06:50:36 -- scripts/common.sh@344 -- $ : 1 00:20:22.615 06:50:36 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:20:22.615 06:50:36 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:22.615 06:50:36 -- scripts/common.sh@364 -- $ decimal 1 00:20:22.615 06:50:36 -- scripts/common.sh@352 -- $ local d=1 00:20:22.615 06:50:36 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:20:22.615 06:50:36 -- scripts/common.sh@354 -- $ echo 1 00:20:22.615 06:50:36 -- scripts/common.sh@364 -- $ ver1[v]=1 00:20:22.615 06:50:36 -- scripts/common.sh@365 -- $ decimal 2 00:20:22.615 06:50:36 -- scripts/common.sh@352 -- $ local d=2 00:20:22.615 06:50:36 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:20:22.615 06:50:36 -- scripts/common.sh@354 -- $ echo 2 00:20:22.615 06:50:36 -- scripts/common.sh@365 -- $ ver2[v]=2 00:20:22.615 06:50:36 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:20:22.615 06:50:36 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:20:22.615 06:50:36 -- scripts/common.sh@367 -- $ return 0 00:20:22.615 06:50:36 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.615 06:50:36 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:20:22.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.615 --rc genhtml_branch_coverage=1 00:20:22.615 --rc genhtml_function_coverage=1 00:20:22.615 --rc genhtml_legend=1 00:20:22.615 --rc geninfo_all_blocks=1 00:20:22.615 --rc geninfo_unexecuted_blocks=1 00:20:22.615 00:20:22.615 ' 00:20:22.615 06:50:36 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:20:22.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.615 --rc genhtml_branch_coverage=1 00:20:22.615 --rc genhtml_function_coverage=1 00:20:22.615 --rc genhtml_legend=1 00:20:22.615 --rc geninfo_all_blocks=1 00:20:22.615 --rc geninfo_unexecuted_blocks=1 00:20:22.615 00:20:22.615 ' 00:20:22.615 06:50:36 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:20:22.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.615 --rc genhtml_branch_coverage=1 00:20:22.615 --rc genhtml_function_coverage=1 00:20:22.615 --rc genhtml_legend=1 00:20:22.615 --rc geninfo_all_blocks=1 00:20:22.615 --rc geninfo_unexecuted_blocks=1 00:20:22.615 00:20:22.615 ' 00:20:22.615 06:50:36 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:20:22.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.615 --rc genhtml_branch_coverage=1 00:20:22.615 --rc genhtml_function_coverage=1 00:20:22.615 --rc genhtml_legend=1 00:20:22.616 --rc geninfo_all_blocks=1 00:20:22.616 --rc geninfo_unexecuted_blocks=1 00:20:22.616 00:20:22.616 ' 00:20:22.616 06:50:36 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.616 06:50:36 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:22.616 06:50:36 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.616 06:50:36 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.616 06:50:36 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.616 06:50:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.616 06:50:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.616 06:50:36 -- paths/export.sh@5 -- $ export PATH 00:20:22.616 06:50:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.616 06:50:36 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:22.616 06:50:36 -- common/autobuild_common.sh@440 -- $ date +%s 00:20:22.616 06:50:36 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734159036.XXXXXX 00:20:22.616 06:50:36 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734159036.t6R5Sm 00:20:22.616 06:50:36 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:20:22.616 06:50:36 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:20:22.616 06:50:36 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:22.616 06:50:36 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:22.616 06:50:36 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:22.616 06:50:36 -- common/autobuild_common.sh@456 -- $ get_config_params 00:20:22.616 06:50:36 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:20:22.616 06:50:36 -- common/autotest_common.sh@10 -- $ set +x 00:20:22.616 06:50:36 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:20:22.616 06:50:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:20:22.616 06:50:36 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:20:22.616 06:50:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:20:22.616 06:50:36 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 
]] 00:20:22.616 06:50:36 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:20:22.616 06:50:36 -- spdk/autopackage.sh@19 -- $ timing_finish 00:20:22.616 06:50:36 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:22.616 06:50:36 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:20:22.616 06:50:36 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:22.875 06:50:36 -- spdk/autopackage.sh@20 -- $ exit 0 00:20:22.875 + [[ -n 5232 ]] 00:20:22.875 + sudo kill 5232 00:20:22.884 [Pipeline] } 00:20:22.900 [Pipeline] // timeout 00:20:22.905 [Pipeline] } 00:20:22.919 [Pipeline] // stage 00:20:22.924 [Pipeline] } 00:20:22.939 [Pipeline] // catchError 00:20:22.948 [Pipeline] stage 00:20:22.950 [Pipeline] { (Stop VM) 00:20:22.963 [Pipeline] sh 00:20:23.245 + vagrant halt 00:20:27.438 ==> default: Halting domain... 00:20:32.722 [Pipeline] sh 00:20:33.003 + vagrant destroy -f 00:20:36.330 ==> default: Removing domain... 00:20:36.342 [Pipeline] sh 00:20:36.623 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:20:36.632 [Pipeline] } 00:20:36.646 [Pipeline] // stage 00:20:36.651 [Pipeline] } 00:20:36.665 [Pipeline] // dir 00:20:36.670 [Pipeline] } 00:20:36.683 [Pipeline] // wrap 00:20:36.690 [Pipeline] } 00:20:36.702 [Pipeline] // catchError 00:20:36.711 [Pipeline] stage 00:20:36.713 [Pipeline] { (Epilogue) 00:20:36.724 [Pipeline] sh 00:20:37.005 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:42.290 [Pipeline] catchError 00:20:42.292 [Pipeline] { 00:20:42.304 [Pipeline] sh 00:20:42.585 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:42.585 Artifacts sizes are good 00:20:42.593 [Pipeline] } 00:20:42.607 [Pipeline] // catchError 00:20:42.618 [Pipeline] archiveArtifacts 00:20:42.625 Archiving artifacts 00:20:42.766 [Pipeline] cleanWs 00:20:42.781 [WS-CLEANUP] Deleting project workspace... 00:20:42.781 [WS-CLEANUP] Deferred wipeout is used... 00:20:42.810 [WS-CLEANUP] done 00:20:42.812 [Pipeline] } 00:20:42.826 [Pipeline] // stage 00:20:42.831 [Pipeline] } 00:20:42.844 [Pipeline] // node 00:20:42.850 [Pipeline] End of Pipeline 00:20:42.887 Finished: SUCCESS